Zotac GeForce GTX 970 AMP! Extreme Unboxing (Video) <!--paging_filter--><h3>Check out Zotac's extremely fancy GeForce GTX 970</h3> <p>Tom's back again with another video; being on camera has made him drunk with power. This time, he's showing off Zotac's shiny AMP! Extreme Edition of the GTX 970, with boosted clock speeds, big cooling, and even a carbon fiber-esque backplate. This card uses Nvidia's new "Maxwell" architecture, which improves power efficiency and performance, in addition to adding features like Voxel Global Illumination and Multi-Frame Sampled Anti-Aliasing. <a title="Nvidia GeForce GTX 980 review" href="" target="_blank">You can read all about that in our review of the GTX 980</a>, which is the 970's big brother (as its numbering probably indicates).</p> <p><img src="/files/u160416/zotac970.jpg" alt="Zotac GeForce GTX 970 AMP! Extreme Edition" title="Zotac GeForce GTX 970 AMP! Extreme Edition" width="620" height="390" /></p> <p>The AMP! Extreme Edition is very fancy and costs $410 (up from the GTX 970's normal $330 asking price). Zotac isn't generally known for high-performance variants the way some of its competitors are: MSI has "Lightning," ASUS has "Republic of Gamers," and Sapphire has "Vapor-X," to name a few. After checking out this card, we wonder if Zotac will get an enthusiast spotlight of its own. Check out our video for the details on this guy.</p> <p><iframe src="//" width="560" height="315" frameborder="0"></iframe></p> geforce graphics GTX 970 MPCTV nvidia unboxing video Video Card zotac News Wed, 08 Oct 2014 19:26:58 +0000 Tom McNamara 28685 at Nvidia GeForce GTX 980 Review <!--paging_filter--><h3><span style="font-size: 1.17em;">4K and SLI tested on Nvidia's high-end Maxwell card</span></h3> <p>Sometimes things don't go according to plan. Both AMD and Nvidia were supposed to have shifted to 20-nanometer parts by now. In theory, that's supposed to get you lower temperatures, higher clock speeds, and quieter operation.
Due to circumstances largely out of its control, Nvidia has had to go ahead with a 28nm high-end Maxwell part instead, dubbed GM204. This is not a direct successor to the GTX 780, which has more transistors, texture mapping units, and things like that. The 980 is actually the next step beyond the GTX 680, aka GK104, which was launched in March 2012.</p> <p>Despite that, our testing indicates that the GTX 980 can still be meaningfully faster than the GTX 780 and 780 Ti (and AMD’s Radeon R9 290 and 290X, for that matter, though there are a couple of games better optimized for Radeon hardware). When 20nm processes become available sometime next year, we’ll probably see the actual successor to the GTX 780. But right now, the GTX 980 is here, and comes in at $549. That seems high at first, but recall that the GTX 680, 580, and 480 all launched at around $500. And keep in mind that it’s a faster card than the 780 and 780 Ti, which currently cost more. (As we wrote this, AMD announced that it was dropping the base price of the R9 290X from $500 to $450, so that war rages on.) The GTX 970 at $329 may be a better deal, but we have not yet obtained one of those for testing.</p> <p>In other news, Nvidia told us that they were dropping the price of the GTX 760 to $219, and the GTX 780 Ti, 780, and 770 are being officially discontinued.
So if you need a second one of those for SLI, now is a good time.</p> <p>Let's take a look at the specs:</p> <div class="spec-table orange"> <table style="width: 620px; height: 265px;" border="0"> <thead></thead> <tbody> <tr> <td></td> <td>GTX 980</td> <td>GTX 970</td> <td>GTX 680</td> <td>GTX 780</td> <td>GTX 780 Ti</td> <td>R9 290X</td> </tr> <tr> <td class="item">Generation</td> <td>GM204</td> <td>GM204&nbsp;</td> <td>GK104&nbsp;</td> <td>GK110&nbsp;</td> <td class="item-dark">GK110</td> <td>&nbsp;Hawaii</td> </tr> <tr> <td>Core Clock (MHz)</td> <td>&nbsp;1126</td> <td>&nbsp;1050</td> <td>&nbsp;1006</td> <td>&nbsp;863</td> <td>876</td> <td>&nbsp;"up to" 1GHz</td> </tr> <tr> <td class="item">Boost Clock (MHz)</td> <td>&nbsp;1216</td> <td>&nbsp;1178</td> <td>&nbsp;1058</td> <td>&nbsp;900</td> <td class="item-dark">928</td> <td>&nbsp;N/A</td> </tr> <tr> <td>VRAM Clock (MHz)</td> <td>&nbsp;7000</td> <td>&nbsp;7000</td> <td>&nbsp;6000</td> <td>&nbsp;6000</td> <td>7000</td> <td>&nbsp;5000</td> </tr> <tr> <td>VRAM Amount</td> <td>&nbsp;4GB</td> <td>&nbsp;4GB</td> <td>&nbsp;2GB/4GB</td> <td>&nbsp;3GB/6GB</td> <td>3GB</td> <td>&nbsp;4GB</td> </tr> <tr> <td>Bus</td> <td>&nbsp;256-bit</td> <td>&nbsp;256-bit</td> <td>&nbsp;256-bit</td> <td>&nbsp;384-bit</td> <td>&nbsp;384-bit</td> <td>&nbsp;512-bit</td> </tr> <tr> <td>ROPs</td> <td>&nbsp;64</td> <td>&nbsp;64</td> <td>&nbsp;32</td> <td>&nbsp;48</td> <td>48</td> <td>&nbsp;64</td> </tr> <tr> <td>TMUs</td> <td>&nbsp;128</td> <td>&nbsp;104</td> <td>&nbsp;128</td> <td>&nbsp;192</td> <td>240</td> <td>&nbsp;176</td> </tr> <tr> <td>Shaders</td> <td>&nbsp;2048</td> <td>&nbsp;1664</td> <td>&nbsp;1536</td> <td>&nbsp;2304</td> <td>2880</td> <td>&nbsp;2816</td> </tr> <tr> <td>SMs</td> <td>&nbsp;16</td> <td>&nbsp;13</td> <td>&nbsp;8</td> <td>&nbsp;12</td> <td>&nbsp;15</td> <td>&nbsp;N/A</td> </tr> <tr> <td>TDP (watts)</td> <td>&nbsp;165</td> <td>&nbsp;145</td> <td>&nbsp;195</td> <td>&nbsp;250</td> <td>&nbsp;250</td> <td>&nbsp;290</td>
</tr> <tr> <td>Launch Price</td> <td>&nbsp;$549</td> <td>&nbsp;$329</td> <td>&nbsp;$499</td> <td>&nbsp;$649</td> <td>&nbsp;$699</td> <td>&nbsp;$549</td> </tr> </tbody> </table> </div> <p>On paper, the 980 and 970 don't look like much of a jump from the 680. In fact, the 980 has only 128 shaders (aka "CUDA cores") per streaming multi-processor (SM). Performance tends to increase with a higher number of shaders per SM, so how did the GTX 980 perform so well in our benches, despite having a lower ratio than all the other cards? Well, Nvidia claims that they've improved the performance of each CUDA core by 40%. Provided that this claim is accurate, the GTX 980 effectively has about as many CUDA cores as a 780 Ti. Add the GTX 980's higher clock speeds, and performance should be higher.</p> <p><img src="/files/u160416/7g0a0209_620.jpg" width="620" height="349" /></p> <p>You probably also noticed the unusually low price for the GTX 970. The GTX 670 launched at $400 in May 2012, and the GTX 570 launched at $350 in December 2010. These two earlier cards also had specs much closer to those of their bigger brothers. For example, the GTX 570 had 480 CUDA cores, while the 580 had 512 cores. This is a difference of just 6.25%, although the memory bus was reduced from 384-bit to 320-bit. In contrast, the 970 gets nearly 20% fewer CUDA cores than the 980, though its memory bus remains unchanged. As we said, we haven't gotten a 970 in yet, but, based on its specs, we doubt that overclocking can close the gap, as it has in the past with the GTX 670 and 760, and the Radeon R9 290.</p> <p>Nvidia also says that the official boost clock on these new Maxwell cards is not set in stone. We witnessed our cards boosting up to 1,253MHz for extended periods of time (20 seconds here, 30 seconds there).
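As a quick sanity check on that per-core claim, the arithmetic works out neatly. This is a back-of-the-envelope sketch using the core counts from the spec table; treating the 40 percent figure as a flat 1.4x multiplier is our simplification, not an official Nvidia formula:

```python
# Back-of-the-envelope check of Nvidia's "40% faster per CUDA core" claim.
# Core counts are from the spec table above; the flat 1.4x multiplier is
# our simplifying assumption, not an official Nvidia formula.
GTX_980_CORES = 2048
GTX_780_TI_CORES = 2880
PER_CORE_GAIN = 1.40  # Nvidia's claimed Maxwell-over-Kepler improvement

effective_cores = GTX_980_CORES * PER_CORE_GAIN  # 2867.2
print(f"Kepler-equivalent cores: {effective_cores:.0f} (780 Ti has {GTX_780_TI_CORES})")
# 2867 vs. 2880 is within about half a percent, which is why the GTX 980
# can plausibly trade blows with the 780 Ti despite far fewer shaders.
```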
When the cards hit their thermal limit of 80 degrees Celsius, they would drop as low as 1,165MHz, but we never saw them throttle below the official base clock of 1,126MHz. In SLI, we also noted that the upper card would go up to 84 C. According to Nvidia, these cards have an upper boundary of 95 C, at which point they will throttle below the base clock to avoid going up in smoke. We were not inclined to test that theory just yet.</p> <h4 style="text-align: right;"><a href=",1" target="_blank">Next Page: Voxels, new anti-aliasing, and VR</a></h4> <hr /> <p>The company also says that its delta color compression algorithms have reduced bandwidth requirements by about 25 percent on average (it varies from game to game). This extra headroom provides more space for increased frame rates. Since DCC directly affects pixels, this effect should scale with your resolution, becoming increasingly helpful as you crank your res higher.</p> <p>You can also combine these gains with Nvidia’s new Multi-Frame Sampled Anti-Aliasing (MFAA). This technique rotates a pixel’s sampling points from one frame to the next, so that two of these points can simulate the visual results of four sampling points whose locations remain static. The effect starts to shimmer below about 20FPS, at which point it’s automatically disabled. But when running well, Nvidia claims that it can be 30 percent faster, on average, than the visually equivalent level of Multi-Sample Anti-Aliasing (MSAA). Like TXAA (Temporal Anti-Aliasing), this technique won’t be available on AMD cards (or if it is, it will be built by AMD from the ground up and called something else).</p> <p><img src="/files/u160416/7g0a0238_resize.jpg" width="620" height="349" /></p> <p>Unfortunately, MFAA was not available in the version 344.07 beta drivers given to us for testing, but Nvidia said it would be in the driver after this one. This means that the package will not be complete on launch day.
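Stepping back to delta color compression for a moment, the bandwidth claim can be put in rough numbers. The memory clocks and bus widths below come from the spec table; treating the 25 percent figure as a flat savings on effective throughput is our own simplification for illustration:

```python
# Rough effective-bandwidth estimate for the GTX 980's 256-bit bus.
# Raw bandwidth = (bus width in bytes) x (effective memory clock).
# Treating delta color compression (DCC) as a flat 25% savings is an
# illustrative simplification; real savings vary per game.
def raw_bandwidth_gbs(bus_bits: int, mem_clock_mhz: int) -> float:
    return (bus_bits / 8) * mem_clock_mhz / 1000  # GB/s

gtx_980_raw = raw_bandwidth_gbs(256, 7000)  # 224 GB/s
gtx_780_raw = raw_bandwidth_gbs(384, 6000)  # 288 GB/s

# If DCC cuts bandwidth requirements by ~25%, the 980's bus behaves
# roughly like a ~299 GB/s uncompressed one:
gtx_980_effective = gtx_980_raw / (1 - 0.25)

print(f"GTX 980 raw: {gtx_980_raw:.0f} GB/s, with DCC: ~{gtx_980_effective:.0f} GB/s")
print(f"GTX 780 raw: {gtx_780_raw:.0f} GB/s")
```

In other words, on this simple model the 980's narrower bus lands in the same effective neighborhood as the 780's 384-bit bus, which helps explain the results that follow.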
Support will trickle down to the older Kepler cards later on. Nvidia hasn’t given a timeline for individual cards, but it sounded like the 750 and 750 Ti (also technically Maxwell cards) will not be invited to this party.</p> <p>Another major upgrade is Voxel Global Illumination, or VXGI. Nvidia positions this as the next step beyond ambient occlusion. With VXGI, light bounces off of surfaces to illuminate nooks and crannies that would otherwise not be lit realistically, in real time. Ordinarily, light does not bounce around in a 3D game engine like it does in meatspace. It simply hits a surface, illuminates it, and that’s the end. Sometimes the lighting effect is just painted onto the texture. So there’s a lot more calculation going on with VXGI.</p> <p><img src="/files/u160416/maxwell_die_620.jpg" width="620" height="349" /></p> <p>But Nvidia has not made specific performance claims because the effect is highly scalable. A developer can choose how many cones of light to use and the resolution of the bounced light (you can go for diffuse, blurry spots of light, or a reflection that’s nearly a mirror image of the bounced surface), balancing the result against a performance target. Since this is something that has to be coded into the game engine, we won’t see that effect right away by forcing it in the drivers, like Nvidia users can with ambient occlusion.</p> <p>Next is Dynamic Super Resolution (in the 344.11 drivers released today, so we'll be giving this one a peek soon). This tech combines super-sampling with a custom filter. Super-sampling renders at a higher resolution than your monitor can display and squishes it down. This is a popular form of anti-aliasing, but the performance hit is pretty steep. The 13-tap Gaussian filter that the card lays on top can further smooth out jaggies. It's a post-process effect that's thankfully very light, and you can also scale DSR down from 3840x2160 to 2560x1440.
It's our understanding that this effect is only available to owners of the 980 and 970, at least for now, but we'll be checking on that ASAP.</p> <p>Nvidia is also investing more deeply into VR headsets with an initiative called VR Direct. Their main bullet point is a reduction in average latency from 50ms to 25ms, using a combination of code optimization, MFAA, and another new feature called Auto Asynchronous Warp (AAW). This displays frames at 60fps even when rendering performance drops below that. Since each eye is getting an independently rendered scene, your PC effectively needs to maintain 120FPS otherwise, which isn’t going to be common with more demanding games. AAW takes care of the difference. However, we haven’t had the opportunity to test the GTX 980 with VR-enabled games yet.</p> <p>Speaking of which, Nvidia is also introducing another new feature called Auto Stereo. As its name implies, it forces stereoscopic rendering in games that were not built with VR headsets in mind. We look forward to testing VR Direct at a later date.</p> <p>Lastly, we also noticed that GeForce Experience can now record at 3840x2160. It was previously limited to 2560x1600.</p> <p>Until we get our hands on MFAA and DSR, we have some general benchmarks to tide you over. We tested the GTX 980 in two-way SLI and by itself, at 2560x1600 and 3840x2160.
We compared it to roughly equivalent cards that we've also run in solo and two-way configs.</p> <h4 style="text-align: right;"><a href=",2" target="_blank">Next Page: SLI Benchmarks!</a></h4> <hr /> <p>Here's the system that we've been using for all of our recent GPU benchmarks:</p> <div class="spec-table orange" style="text-align: center;"> <table style="width: 620px; height: 265px;" border="0"> <thead></thead> <tbody> <tr> <td><strong>Part</strong></td> <td><strong>Component</strong></td> </tr> <tr> <td class="item">CPU</td> <td class="item-dark">Intel Core i7-3960X (at stock clock speeds; 3.3GHz base, 3.9GHz turbo)</td> </tr> <tr> <td>CPU Cooler</td> <td>Corsair Hydro Series H100</td> </tr> <tr> <td class="item">Mobo</td> <td class="item-dark">Asus Rampage IV Extreme</td> </tr> <tr> <td>RAM</td> <td>4x 4GB G.Skill Ripjaws X, 2133MHz CL9</td> </tr> <tr> <td>Power Supply</td> <td>Thermaltake Toughpower Grand (1,050 watts)</td> </tr> <tr> <td>SSD</td> <td>1TB Crucial M550</td> </tr> <tr> <td>OS</td> <td>Windows 8.1 Update 1</td> </tr> <tr> <td>Case</td> <td>NZXT Phantom 530&nbsp;</td> </tr> </tbody> </table> </div> <p class="MsoNormal" style="text-align: left;"><span style="text-align: center;">Now, let’s take a look at our results at 2560x1600 with 4xMSAA. For reference, this is twice as many pixels as 1920x1080. So gamers playing at 1080p on a similar PC can expect roughly twice the framerate, if they use the same graphical settings. We customarily use the highest preset provided by the game itself; for example, <em>Hitman: Absolution</em> is benchmarked with the “Ultra” setting. 3DMark runs the Firestrike test at 1080p, however. 
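The pixel-count comparisons used throughout these benchmarks are easy to verify with a few lines of arithmetic:

```python
# Pixel-count ratios between the resolutions used in this review.
def pixels(w: int, h: int) -> int:
    return w * h

p1080 = pixels(1920, 1080)  # 2,073,600
p1600 = pixels(2560, 1600)  # 4,096,000
p4k = pixels(3840, 2160)    # 8,294,400

print(f"2560x1600 vs 1080p: {p1600 / p1080:.2f}x")  # ~1.98x, "roughly twice"
print(f"4K vs 2560x1600:    {p4k / p1600:.2f}x")    # just over 2x
print(f"4K vs 1080p:        {p4k / p1080:.2f}x")    # exactly 4x
```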
We also enable TressFX in Tomb Raider, and PhysX in Metro: Last Light.</span></p> <div class="spec-table orange"> <table style="width: 620px; height: 265px;" border="0"> <thead></thead> <tbody> <tr> <td></td> <td>GTX 980</td> <td>GTX 680</td> <td>GTX 780</td> <td>GTX 780 Ti</td> <td>R9 290X</td> </tr> <tr> <td class="item">Tomb Raider</td> <td>&nbsp;<strong>33</strong></td> <td>&nbsp;19</td> <td>25</td> <td class="item-dark">&nbsp;27</td> <td>&nbsp;26</td> </tr> <tr> <td>Metro: Last Light</td> <td>&nbsp;<strong>46</strong></td> <td>&nbsp;21</td> <td>&nbsp;22</td> <td>&nbsp;32</td> <td>&nbsp;30</td> </tr> <tr> <td>Batman: Arkham Origins</td> <td>&nbsp;75</td> <td>&nbsp;51</td> <td>&nbsp;65</td> <td>&nbsp;<strong>78</strong></td> <td>&nbsp;65</td> </tr> <tr> <td>Hitman: Absolution</td> <td>&nbsp;42</td> <td>&nbsp;27</td> <td>&nbsp;40</td> <td>&nbsp;45</td> <td>&nbsp;<strong>50</strong></td> </tr> <tr> <td>Unigine Valley</td> <td>&nbsp;45</td> <td>&nbsp;30</td> <td>&nbsp;43</td> <td>&nbsp;<strong>48</strong></td> <td>&nbsp;41</td> </tr> <tr> <td>Unigine Heaven</td> <td>&nbsp;<strong>39</strong></td> <td>&nbsp;64</td> <td>&nbsp;35</td> <td>&nbsp;<strong>39</strong></td> <td>&nbsp;34</td> </tr> <tr> <td>3DMark Firestrike</td> <td>&nbsp;<strong>11,490</strong></td> <td>&nbsp;6,719</td> <td>&nbsp;8,482</td> <td>&nbsp;9,976</td> <td>9,837</td> </tr> </tbody> </table> </div> <p>(best scores <strong>bolded</strong><strong>)</strong></p> <p class="MsoNormal" style="text-align: left;">To synthesize the results into a few sentences, we would say that the 980 is doing very well for its price. It’s not leapfrogging over the 780 and 780 Ti, but Nvidia indicates that it’s not supposed to anyway. It dominates the GTX 680, but that card is also two years old and discontinued, so the difference is not unexpected or likely to change buying habits. The R9 290X, meanwhile, is hitting $430, while the not-much-slower 290 can be had for as little as $340. 
And you can pick up a 780 Ti for $560. So the GTX 980's price at launch is going to be a bit of a hurdle for Nvidia.</p> <p class="MsoNormal" style="text-align: left;">Performance in Metro: Last Light has also vastly improved. (We run that benchmark with “Advanced PhysX” enabled, so the jump suggests that Nvidia has made some PhysX optimizations; further testing is needed.) Loyal Radeon fans will probably not be swayed to switch camps, at least on the basis of pure performance. Hitman in particular does not appear to favor the Green Team.</p> <p class="MsoNormal" style="text-align: left;">We were fortunate enough to obtain a second GTX 980, so we decided to set them up in SLI, at the same resolution of 2560x1600. Here, the differences are more distinct. We’ve narrowed the comparison down to the most competitive cards that we have SLI/CF benchmarks for. (Unfortunately, we do not have a second GTX 680 in hand at this time. But judging by its single-card performance, it's very unlikely to suddenly pull ahead.)
For this special occasion, we brought in the Radeon R9 295X2, which has two 290X GPUs on one card and has been retailing lately for about a thousand bucks.</p> <div class="spec-table orange"> <table style="width: 620px; height: 265px;" border="0"> <thead></thead> <tbody> <tr> <td></td> <td>GTX 980</td> <td>GTX 780</td> <td>GTX 780 Ti</td> <td>R9 295X2</td> </tr> <tr> <td class="item">Tomb Raider</td> <td>&nbsp;<strong>66</strong></td> <td>&nbsp;45</td> <td>&nbsp;56</td> <td>&nbsp;50</td> </tr> <tr> <td>Metro: Last Light</td> <td>&nbsp;<strong>70</strong></td> <td>&nbsp;52</td> <td>&nbsp;53</td> <td>&nbsp;48</td> </tr> <tr> <td>Batman: Arkham Origins</td> <td>&nbsp;131</td> <td>&nbsp;122</td> <td>&nbsp;<strong>143</strong></td> <td>&nbsp;90</td> </tr> <tr> <td>Hitman: Absolution</td> <td>&nbsp;77</td> <td>&nbsp;74</td> <td>&nbsp;<strong>79</strong></td> <td>&nbsp;79</td> </tr> <tr> <td>Unigine Valley</td> <td>&nbsp;80</td> <td>&nbsp;72</td> <td>&nbsp;<strong>87</strong></td> <td>&nbsp;41</td> </tr> <tr> <td>Unigine Heaven</td> <td>&nbsp;73</td> <td>&nbsp;60</td> <td><strong>&nbsp;77</strong></td> <td>&nbsp;65</td> </tr> <tr> <td>3DMark Firestrike</td> <td>&nbsp;<strong>17,490</strong></td> <td>&nbsp;14,336</td> <td>&nbsp;16,830</td> <td>&nbsp;15,656</td> </tr> </tbody> </table> </div> <p>(best scores <strong>bolded</strong>)</p> <p class="MsoNormal" style="text-align: left;">While a solo 980 GTX is already a respectable competitor for the price, its success is more pronounced when we add a second card—as is the gap between it and the 780 Ti. It still continues to best the GTX 780, getting us over 60 FPS in each game with all visual effects cranked up. That's an ideal threshold. It also looks like Nvidia's claim of 40 percent improved CUDA core performance may not be happening consistently. Future driver releases should reveal if this is a matter of software optimization, or if it's a limitation in hardware. 
Or just a random cosmic anomaly.</p> <h4 style="text-align: right;"><a href=",3" target="_blank">Next Page: 4K benchmarks and conclusion</a></h4> <hr /> <p class="MsoNormal" style="text-align: left;">So, what happens when we scale up to 3840x2160, also known as “4K”? Here we have almost twice as many pixels as 2560x1600, and four times as many as 1080p. Can the GTX 980’s 256-bit bus really handle this much bandwidth?</p> <div class="spec-table orange"> <table style="width: 620px; height: 265px;" border="0"> <thead></thead> <tbody> <tr> <td></td> <td>GTX 980</td> <td>GTX 680</td> <td>GTX 780</td> <td>GTX 780 Ti</td> <td>R9 290X</td> </tr> <tr> <td class="item">Tomb Raider</td> <td>&nbsp;16</td> <td>&nbsp;8.7*</td> <td>&nbsp;26</td> <td class="item-dark">&nbsp;<strong>28</strong></td> <td>&nbsp;28</td> </tr> <tr> <td>Metro: Last Light</td> <td>&nbsp;<strong>36</strong></td> <td>&nbsp;12</td> <td>&nbsp;18</td> <td>&nbsp;19</td> <td>&nbsp;18</td> </tr> <tr> <td>Batman: Arkham Origins</td> <td>&nbsp;35</td> <td>&nbsp;25</td> <td>&nbsp;33</td> <td>&nbsp;<strong>38</strong></td> <td>&nbsp;38</td> </tr> <tr> <td>Hitman: Absolution</td> <td>&nbsp;20</td> <td>&nbsp;15</td> <td>&nbsp;20</td> <td>&nbsp;24</td> <td><strong>&nbsp;28</strong></td> </tr> <tr> <td>Unigine Valley</td> <td>&nbsp;19</td> <td>&nbsp;15</td> <td>&nbsp;<strong>30</strong></td> <td><strong>&nbsp;30</strong></td> <td>&nbsp;26</td> </tr> <tr> <td>Unigine Heaven</td> <td>&nbsp;19</td> <td>&nbsp;11</td> <td>&nbsp;<strong>23</strong></td> <td><strong>&nbsp;23</strong></td> <td>&nbsp;18</td> </tr> </tbody> </table> </div> <p>(best scores <strong>bolded</strong>)</p> <p>*TressFX disabled</p> <p>The 980 is still scaling well, but the 384-bit 780 and 780 Ti are clearly scaling better, as is the 512-bit 290X. 
(<strong>Update:</strong>&nbsp;We've re-checked our test results for Hitman: Absolution, and the AMD cards weren't doing nearly as well as we originally thought, though they're still the best option for that particular game. The Batman tests have been re-done as well.) We had to disable TressFX when benchmarking the 680, because the test would crash otherwise, and it was operating at less than 1FPS anyway. At 4K, that card basically meets its match, and almost its maker.</p> <p>Here's 4K SLI/Crossfire. All tests are still conducted at 4xMSAA, which is total overkill at 4K, but we want to see just how hard we can push these cards. (Ironically, we have most of the CrossFire results for the 290X here, but not for 2560x1600. That's a paddlin'.)</p> <div class="spec-table orange"> <table style="width: 620px; height: 265px;" border="0"> <thead></thead> <tbody> <tr> <td></td> <td>GTX 980</td> <td>GTX 780</td> <td>GTX 780 Ti</td> <td>R9 290X</td> <td>R9 295X2</td> </tr> <tr> <td class="item">Tomb Raider</td> <td>&nbsp;33</td> <td>&nbsp;41</td> <td>&nbsp;44</td> <td class="item-dark">&nbsp;52</td> <td>&nbsp;<strong>53</strong></td> </tr> <tr> <td>Metro: Last Light</td> <td>&nbsp;<strong>43</strong></td> <td>&nbsp;21</td> <td>&nbsp;27</td> <td>&nbsp;29</td> <td>&nbsp;26</td> </tr> <tr> <td>Batman: Arkham Origins</td> <td>&nbsp;<strong>68</strong></td> <td>&nbsp;60</td> <td>&nbsp;65</td> <td>&nbsp;67</td> <td>&nbsp;66</td> </tr> <tr> <td>Hitman: Absolution</td> <td>&nbsp;42</td> <td>&nbsp;40</td> <td>&nbsp;44</td> <td><strong>&nbsp;53</strong></td> <td>&nbsp;50</td> </tr> <tr> <td>Unigine Valley</td> <td>&nbsp;39</td> <td>&nbsp;<strong>43</strong></td> <td>&nbsp;40</td> <td>&nbsp;24</td> <td>&nbsp;19</td> </tr> <tr> <td>Unigine Heaven</td> <td>&nbsp;34</td> <td>&nbsp;33</td> <td>&nbsp;<strong>44</strong></td> <td>&nbsp;17</td> <td>&nbsp;34</td> </tr> </tbody> </table> </div> <p>(best scores <strong>bolded</strong>)</p> <p>It does appear that the
raw memory bandwidth of the 780, 780 Ti, and 290X comes in handy at this resolution, despite the optimizations of Maxwell's CUDA cores. That Metro: Last Light score remains pretty interesting. It's the only one we run with PhysX enabled (to balance out using TressFX in Tomb Raider). It really does look like Maxwell is much better at PhysX than any other GPU before it. That tech isn't quite common enough to change the game. But if the difference is as good as our testing indicates, more developers may pick it up.</p> <p>Even a blisteringly fast card can be brought down by high noise levels or prodigious heat. Thankfully, this reference cooler is up to the task. Keep in mind that this card draws up to 165 watts, and its cooler is designed to handle cards that go up to 250W. But even with the fan spinning up to nearly 3,000rpm, it’s not unpleasant. With the case side panels on, you can still hear the fan going like crazy, but we didn’t find it distracting. These acoustics only happened in SLI, by the way. Without the primary card sucking in hot air from the card right below it, its fan ran much more quietly. The GTX 980’s cooling is nothing like the hot and loud reference design of the Radeon R9 290 or 290X.</p> <p><img src="/files/u160416/key_visual_620.jpg" width="620" height="349" /></p> <p>With a TDP of just 165W, a respectable 650-watt power supply should have no trouble powering two GTX 980s. Meanwhile, the 290-watt R9 290X really needs a nice 850-watt unit to have some breathing room, and even more power would not be unwelcome.</p> <p>Since MFAA and DSR were not available in the driver that was supplied for testing, there’s more story for us to tell over the coming weeks. (<strong>Update</strong>: DSR settings are actually in this driver, just not in the location that we were expecting.) And we still need to do some testing with VR. But as it stands right now, the GTX 980 is another impressive showing for Nvidia.
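As a rough sketch of the power-supply reasoning above: the TDP figures come from this review, while the assumption that the rest of a high-end rig draws around 200W under load is our own ballpark estimate, not a vendor figure:

```python
# Rough total-draw sketch using the TDPs from this review.
# ASSUMPTION (ours): the rest of a high-end rig (CPU, motherboard,
# drives, fans) pulls on the order of 200W under load. Real draw varies.
def estimated_draw(gpu_tdp: int, gpu_count: int, system_watts: int = 200) -> int:
    return gpu_tdp * gpu_count + system_watts

two_980s = estimated_draw(165, 2)   # 530W -- a 650W unit keeps ~120W spare
two_290xs = estimated_draw(290, 2)  # 780W -- even 850W leaves little headroom
print(f"Two GTX 980s: ~{two_980s}W, two R9 290Xs: ~{two_290xs}W")
```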
Its 4K scaling isn't as good as we'd like, especially since Maxwell is currently the only tech that will have Dynamic Super Resolution. If you want to play at that level, it looks like the 290 and 290X are better choices, price-wise, while the overall performance crown at 4K still belongs to the 780 and 780 Ti. But considering the price difference between the 980 and the 780, its similar performance is commendable.</p> <p>For 2560x1600 or lower resolutions, the GTX 980 emerges as a compelling option, but we're not convinced that it's over $100 better than a 290X. Then again, you have MFAA, DSR, and VR Direct (and the overall GeForce Experience package, which is a bit slicker than AMD's Gaming Evolved), which might win over some people, particularly Nvidia loyalists who've been waiting for an upgrade from their 680 that's not quite as expensive as the 780 or 780 Ti.</p> <p><a href="" target="_blank">Our amigo Wes Fenlon over at PC Gamer has a write-up of his own</a>, so go check it out.</p> 4k 980 GTX benchmarks comparison geforce gpu nvidia performance Review sli Video Card Videocards Fri, 19 Sep 2014 03:04:15 +0000 Tom McNamara 28564 at Nvidia GeForce GTX 980, GTX 970, and GTX 980M Benchmarks Purportedly Leaked <!--paging_filter--><h3><img src="/files/u69/nvidia_card_0.jpg" alt="Nvidia Card" title="Nvidia Card" width="330" height="241" style="float: right;" />Here's a look at how Nvidia's next batch of graphics cards might perform</h3> <p>Shall we kick off the work week with some rumors, speculation, and purportedly leaked info? Sure, why not! What we have tumbling out of the rumor mill today is the notion that Nvidia is going to launch its GeForce 900 Series cards based on its Maxwell architecture on September 19.
Specifications are hard to come by, but in the meantime, <strong>some supposed benchmark scores of Nvidia's forthcoming GeForce GTX 980, GTX 970, and GTX 980M are making the rounds in cyberspace</strong>.</p> <p>The folks at <a href="" target="_blank"><em></em></a> posted what they claim are benchmarks of the aforementioned cards, which they then assembled into a neat chart fleshed out with several existing graphics cards. In 3DMark Fire Strike, the GeForce GTX 980 sits pretty high with a score of 13,005 and is only trumped by dual-GPU configurations. As a point of reference, the GeForce GTX 780 Ti posted a score of 12,702. There are three different clock speeds posted for the GTX 980, and that's because <em></em> was unable to confirm which is the actual reference clock. The 13,005 score represents the fastest-clocked version (1190MHz core). It's surmised that the card sports 4GB of GDDR5 memory on a 256-bit bus and a 7GHz memory clock.</p> <div>As for the GTX 970, it scored slightly above a GTX 780 (10,282 versus 10,008, respectively).</div> <div>What's most impressive, however, is the purported performance gain of the GTX 980M. In 3DMark Fire Strike, the 980M scored 9,364, about twice as high as the 870M (4,697) and well above the 880M (5,980). <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> </div> Build a PC geforce gpu graphics card GTX 970 GTX 980 GTX 980M Hardware nvidia Video Card News Mon, 08 Sep 2014 19:58:02 +0000 Paul Lilly 28497 at Build it: Real-World 4K Gaming Test Bench <!--paging_filter--><h3>This month, we find out what it takes to run games at 4K, and do so using a sweet open-air test bench</h3> <p>The computer world loves it when specs double from one generation to the next. We’ve gone from 16-bit to 32-bit, and finally 64-bit computing. We had 2GB RAM sticks, then 4GB, then 8GB.
With monitor resolutions, 1920x1080 has been the standard for a while, and we never quite doubled it; 2560x1600 was only a half-step up. Now that 4K has arrived, though, the standard has effectively been doubled in each dimension, with the panels released so far being 3840x2160. We know it’s not actually 4,000 pixels across, but everyone is still calling it “4K.” Though the resolution is doubled over 1080p, that works out to the same number of pixels as four 1080p monitors, so it takes a lot of horsepower to play games smoothly. For example, our 2013 Dream Machine used four Nvidia GeForce GTX Titans and a CPU overclocked to 5GHz to handle it. Those cards cost $4,000 altogether though, so it wasn’t a scenario for mere mortals. This month, we wanted to see what 4K gaming is like with more-affordable parts. We also wanted to try a distinctive-looking open test bench from DimasTech. This type of case is perfect for SLI testing, too, since it makes component installation and swapping much quicker.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/beauty_shot_small_29.jpg"><img src="/files/u152332/beauty_shot_small_28.jpg" width="620" height="417" /></a></p> <h3>Triple Threat</h3> <p>Instead of GTX Titans, we’re stepping it down a couple of notches to Nvidia GTX 780s. They provide similar gaming performance, but at half the cost. We’re also using “only” three cards instead of four, so the price difference from Dream Machine to this rig is a whopping $2,500 (even more if you count the fact that the Dream Machine cards were water-cooled). These cards still need a lot of bandwidth, though, so we’re sticking with an Intel LGA 2011 motherboard, this time an Asus X79 Deluxe. It’s feature-packed and can overclock a CPU like nobody’s business. The X79 Deluxe is running Intel’s Core i7-4960X CPU, which has six cores and twelve processing threads. It’s kind of a beast.
We’re cooling it with a Cooler Master Glacer 240L water cooler, which comes with a 240mm radiator.</p> <p>We’ll also need a boatload of power, so we grabbed a Corsair AX1200 PSU which, as its name suggests, supplies up to 1200 watts. It’s also fully modular, meaning that its cables are all detachable. Since we’re only using one storage device in this build, we can keep a lot of spare cables tucked away in a bag, instead of cluttering up the lower tray.</p> <p>All of this is being assembled on a DimasTech Easy V3 test bench, which is a laser-cut steel, hand-welded beauty made in Italy and painted glossy red. It can handle either a 360mm or 280mm radiator as well, and it comes with an articulating arm to move a case fan around to specific areas. It seems like the ultimate open-air test bench, so we’re eager to see what we can do with it.</p> <h4>1. Case Working</h4> <p>The DimasTech Easy V3 comes in separate parts, but the bulk of it is an upper and lower tray. You slide the lower one in and secure it with a bundled set of six aluminum screws. The case’s fasteners come in a handy plastic container with a screw-on lid. Shown in the photo are the two chromed power and reset buttons, which are the last pieces to be attached. They have pre-attached hexagonal washers, which can be a bit tricky to remove. We had to use pliers on one of them. You’ll need to wire them up yourself, but there’s a diagram included. Then, connect the other end to the motherboard’s front panel header, which has its own diagram printed on the board.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/a_small_29.jpg"><img src="/files/u152332/a_small_28.jpg" title="Image A" width="620" height="413" /></a></p> <h4>2. Getting Testy</h4> <p>Unfortunately, the Easy V3 does not ship with a 2.5-inch drive bay, nor do standard 3.5-inch to 2.5-inch adapters fit inside the bays.
If you want to install a solid-state drive, you need to purchase the correctly sized bay or adapter separately from DimasTech. Since this is an open test bench, which is designed for swapping parts quickly, we chose to just leave the drive unsecured. It has no moving parts, so it doesn’t need to be screwed down or even laid flat to work properly. We also moved the 5.25-inch drive bay from the front to the back, to leave as much room as possible to work with our bundle of PSU cables. The lower tray has a number of pre-drilled holes to customize drive bay placement. Meanwhile, our power supply must be oriented just like this to properly attach to the case’s specified bracket. It’s not bad, though, because this positions the power switch higher up, where it’s less likely to get bumped accidentally.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/b_small_24.jpg"><img src="/files/u152332/b_small_23.jpg" title="Image B" width="620" height="413" /></a></p> <h4>3. Able Cables</h4> <p>The best way to install a modular power supply is to attach your required cables first. This time, we got a kit from Corsair that has individually sleeved wires. It costs $40, and also comes in red, white, or blue. Each of these kits is designed to work with a specific Corsair power supply. They look fancier than the stock un-sleeved cables, and the ones for motherboard and CPU power are a lot more flexible than the stock versions. All of the connectors are keyed, so you can’t accidentally plug them into the wrong socket. We used a few black twist ties to gather in the PCI Express cables.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/c_small_27.jpg"><img src="/files/u152332/c_small_26.jpg" title="Image C" width="620" height="413" /></a></p> <h4>4. Taking a Stand(off)</h4> <p>The Easy V3 comes with an unusually tall set of metal motherboard standoffs. 
These widgets prevent the motherboard from touching the tray below and possibly creating a short circuit. You can screw these in by hand, optionally tightening them up with a pair of pliers. Once those were in, we actually used some thumbscrews bundled with the case to screw the board down on the standoffs. You can use more standard screws, but we had plenty to spare, and we liked the look. The tall standoffs also work nicely with custom liquid cooling loops, because there is enough clearance to send thick tubing underneath (and we’ve seen lots of photos on the Internet of such setups). For us, it provided enough room to install a right-angle SATA cable and send it through the oval cut-out in the tray and down to the SSD below.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/d_small_23.jpg"><img src="/files/u152332/d_small_22.jpg" title="Image D" width="620" height="413" /></a></p> <p style="text-align: center;">&nbsp;</p> <hr /> <p>&nbsp;</p> <h4>5. Triple Play</h4> <p>This bench has a black bracket that holds your PCIe cards and can be slid parallel to the motherboard to accommodate different board layouts. It will take up to four two-slot cards, and DimasTech sells a longer 10-slot bracket on its website for workstation boards. We had to use the provided aluminum thumbscrews to secure the cards, since all of the screws we had in The Lab were either too coarsely threaded or not the right diameter, which is unusual. Installing cards is easy, because your view of the board slot is not blocked by a case. The video cards will end up sandwiched right next to each other, though, so you’ll need a tool to release the slot-locking mechanism on two of them (we used a PCI slot cover). The upper two cards can get quite toasty, so we moved the bench’s built-in flexible fan arm right in front of their rear intake area, and we told the motherboard to max out its RPM. 
We saw an immediate FPS boost in our tests, because by default these cards will throttle once they get to about 83 C.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/e_small_21.jpg"><img src="/files/u152332/e_small_20.jpg" title="Image E" width="620" height="413" /></a></p> <h4>6. Cool Under Pressure</h4> <p>Since the Glacer 240L cooler has integrated tubing that’s relatively short, the orientation pictured was our only option. We could have put the fans on the other side of the radiator, but since performance was already superb, we decided we liked the look of them with the grills on top. To mount the radiator, we used the bundled screws, which became the right length when we added some rubber gaskets, also included. The radiator actually doesn’t give off much heat, even when the CPU is overclocked and firing on all cylinders, so we didn’t have to worry about the nearby power supply fan pulling in a lot of hot intake. In fact, the CPU never crossed 65 C in all of our benchmarks, even when overclocked to 4.5GHz. We even threw Prime95 at it, and it didn’t break a sweat. Temperatures are also affected by ambient conditions, though. With our open-air layout, heat coming out of the GPUs doesn’t get anywhere near the radiator, and The Lab’s air conditioning helps keep temperatures low, so it’s pretty much an ideal environment, short of being installed in a refrigerator.
Your mileage may vary.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/f_small_22.jpg"><img src="/files/u152332/f_small_21.jpg" title="Image F" width="620" height="413" /></a></p> <p style="text-align: center;"><a class="thickbox" href="/files/u152332/main_image_small_18.jpg"><img src="/files/u152332/main_image_small_17.jpg" title="Main Image" width="620" height="382" /></a></p> <h3>A Golden Triangle</h3> <p>Despite our penchant for extreme performance, we rarely build triple-GPU systems, so we weren’t sure how well they’d handle 4K, but we figured they’d kick ass. Thankfully, they handled UHD quite well. So well, in fact, that we also tested the system with “only” two GTX 780s and still got respectable gaming performance. For example, with two cards, the Bioshock Infinite benchmark reported an average of a little over 60 FPS on its highest settings. In Tomb Raider, we disabled anti-aliasing and TressFX, maxing out all the other settings, and we still averaged 62 FPS. We benchmarked the opening sequence of Assassin’s Creed 4 with AA and PhysX disabled and everything else maxed out, and we averaged 47 FPS. The Metro: Last Light benchmark, however, averaged 25 FPS on max settings, even with PhysX disabled.</p> <p>Unfortunately, we had trouble getting Hitman: Absolution and Metro: Last Light to recognize the third card. This issue is not unheard of, and it made us think: If you stick with two GPUs, you no longer need the PCI Express bandwidth of expensive LGA 2011 CPUs, or their equally expensive motherboards, or a huge power supply. That potentially cuts the cost of this system in half, from around $4,200 to roughly $2,100. You could also save money by going with, say, a Core i7-4930K, a less expensive LGA 2011 motherboard, and a smaller SSD.
But it’s still a pretty steep climb in price when going from two cards to three.</p> <p>The test bench itself feels sturdy and looks sweet, but we wish that it accepted standard computer-type screws, and that it came with a 2.5-inch drive bay or could at least fit a standard 3.5-to-2.5 adapter. We’d also recommend getting a second articulating fan arm if you’re liquid-cooling, so that one could provide airflow to the voltage regulators around the CPU, and the other could blow directly on your video cards. With the fan aimed at our cards, we instantly gained another 10 FPS in the Tomb Raider benchmark.</p> <p>The Seagate 600 SSD was nice and speedy, although unzipping compressed files seemed to take longer than usual. The X79 Deluxe motherboard gave us no trouble, and the bundled “Asus AI Suite III” software has lots of fine-grained options for performance tuning and monitoring, and it looks nice. Overall, this build was not only successful but educational, too.</p> <div class="module orange-module article-module"><strong><span class="module-name">Benchmarks</span></strong><br /> <div class="spec-table orange"> <table style="width: 627px; height: 270px;" border="0"> <thead> <tr> <th class="head-empty"> </th> <th class="head-light"> <p style="font-size: 10px; font-weight: normal; text-align: start;"><strong>ZERO</strong></p> <p style="font-size: 10px; font-weight: normal; text-align: start;"><strong>POINT</strong></p> </th> <th></th> </tr> </thead> <tbody> <tr> <td class="item">Premiere Pro CS6 (sec)</td> <td class="item-dark">2,000</td> <td><span style="text-align: center;">1,694</span><strong>&nbsp;</strong></td> </tr> <tr> <td>Stitch.Efx 2.0 (sec)</td> <td>831</td> <td><span style="text-align: center;">707</span><strong>&nbsp;</strong></td> </tr> <tr> <td class="item">ProShow Producer 5.0 (sec)</td> <td class="item-dark">1,446</td> <td>1,246</td> </tr> <tr> <td>x264 HD 5.0 (fps)</td> <td>21.1</td> <td>25.6<strong></strong></td> </tr> <tr> <td>Batman: Arkham City
(fps)</td> <td>76</td> <td>169<strong></strong></td> </tr> <tr> <td class="item">3DMark11 Extreme</td> <td class="item-dark">5,847</td> <td>12,193</td> </tr> </tbody> </table> </div> </div> <p><span style="font-size: 10px; font-weight: bold;"><em>The zero-point machine compared here consists of a 3.2GHz Core i7-3930K and 16GB of Corsair DDR3/1600 on an Asus P9X79 Deluxe motherboard. It has a GeForce GTX 690, a Corsair Neutron GTX SSD, and 64-bit Windows 7 Professional.</em></span></p> 4k computer gaming pc geforce Hardware maximum pc May issues 2014 nvidia open Test Bench Features Wed, 03 Sep 2014 19:29:01 +0000 Tom McNamara 28364 at Nvidia GeForce 337.88 Driver Now Available to Download <!--paging_filter--><h3><img src="/files/u69/geforce_close.jpg" alt="GeForce Close" title="GeForce Close" width="228" height="152" style="float: right;" />New drivers coincide with Watch Dogs launch</h3> <p><strong>Nvidia on Monday launched new GeForce 337.88 WHQL-certified drivers</strong> in preparation for today's release of Ubisoft's much-anticipated Watch Dogs title. According to Nvidia, this latest release "ensures you'll have the best possible gaming experience for Watch Dogs." In addition, Nvidia promises performance gains of 10 percent or more in several titles at 2560x1440 and 3840x2160 (4K) resolutions.</p> <p>Some of these include Call of Duty: Ghosts, F1 2013, Hitman Absolution, Sniper Elite V2, DiRT 3, Just Cause 2, Team Fortress 2, Sleeping Dogs, Thief, and a few others.</p> <p>Nvidia also said it made some key DirectX optimizations that should result in lower game loading times and "significant performance increases" in a bunch of titles compared to the previous 335.23 WHQL drivers.
You can also expect CPU overhead reductions, which should improve performance across the board.</p> <p>You can find out more in the <a href="" target="_blank">Release Notes (PDF)</a> and grab the updated drivers directly from <a href="" target="_blank">Nvidia</a>.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> 337.88 driver Gaming geforce gpu graphics card nvidia Software Video Card watch dogs News Tue, 27 May 2014 15:38:03 +0000 Paul Lilly 27882 at Nvidia Announces New GeForce Game Bundles with Select GPUs <!--paging_filter--><h3><img src="/files/u69/daylight.jpg" alt="Daylight" title="Daylight" width="228" height="180" style="float: right;" />Score Daylight or $150 of in-game currency</h3> <p>Winter is almost over, though depending on where you live (like the Midwest), you might not know it by the amount of snow that sits unmelted outside. Nevertheless, <strong>Nvidia is ready to welcome the spring season by offering two fresh game bundles </strong>with the purchase of select GeForce GTX desktop graphics cards and GeForce GTX-powered laptops. One of the freebie offerings is a downloadable code for Daylight, the first game built around the Unreal Engine 4.</p> <p>To qualify for Daylight, you'll need to purchase a qualifying GPU, among which are the GeForce GTX Titan, 780 Ti, 780, 770, 760, 690, 680, 670, 660 Ti, and 650. If you pick one up from a participating vendor, you'll receive a redemption code, which you can activate beginning April 8, 2014 (the day the game launches).</p> <p>Nvidia's other bundle is $150 ($50 per title) of in-game currency for three free-to-play games: Warface, Heroes of Newerth, and Path of Exile.
The in-game bounty will be given to gamers who purchase a GeForce GTX 650, 650 Ti, 750, or 750 Ti graphics card, or select GTX 700M/800M-based notebooks.</p> <p>You can find a full list of participating vendors by going <a href="" target="_blank">here</a> or <a href="" target="_blank">here</a>.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> games geforce graphics card Hardware nvidia Software Video Card News Wed, 12 Mar 2014 18:34:07 +0000 Paul Lilly 27428 at Nvidia Envisions Longer Lasting Gaming Laptops with 800M Series GPUs <!--paging_filter--><h3><img src="/files/u69/geforce_gtx_880m.jpg" alt="GeForce GTX 880M" title="GeForce GTX 880M" width="228" height="137" style="float: right;" />A top to bottom GPU refresh</h3> <p><strong>Nvidia today splashed the mobile market with more than half a dozen new GPUs</strong> comprising the company's GeForce GTX 800M Series. This is a top to bottom release, meaning the new GPUs range from entry-level (GeForce 830M) all the way up to what Nvidia claims is the fastest mobile graphics chip in the world, the GeForce GTX 880M. The new releases join Nvidia's already available 820M GPU.</p> <p>According to Nvidia, the new GPUs are 30 percent, 40 percent, and even 60 percent (in some cases) faster than its previous generation of mobile GPUs, and none more burly than the 880M. Nvidia's flagship part is based on Kepler (not Maxwell) and features 1,536 CUDA cores and 128 TMUs. It has a core clockspeed of 954MHz and supports up to 4GB of GDDR5 memory clocked at 5,000MHz (effective) on a 256-bit memory bus.</p> <p>This release isn't solely focused on speed, however, as Nvidia points out that its Battery Boost technology is one of several new feature additions. 
This one in particular is supposed to be able to deliver up to twice the gaming battery life when enabled.</p> <p>"Here’s how it works: instead of your notebook pushing every component to its max, Battery Boost targets a user defined frame rate, such as 30 FPS. The driver level governor takes over from there, and operates all your system components, including CPU, GPU, and memory at peak efficiency, while maintaining a smooth, playable experience," Nvidia explains in a <a href="" target="_blank">blog post</a>.</p> <p>More details on the GeForce 830M, 840M, GTX 850M, GTX 860M, GTX 870M, and GTX 880M are available on <a href="" target="_blank">Nvidia's website</a>.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> 800m Gaming geforce gpu graphics Hardware laptops mobile notebooks nvidia News Wed, 12 Mar 2014 16:56:06 +0000 Paul Lilly 27426 at EVGA GTX 780 SC w/ACX Review <!--paging_filter--><h3><img src="/files/u163784/evga_small.png" alt="EVGA GTX 780 w/ACX" title="EVGA GTX 780 w/ACX" style="float: right;" />As good as it gets</h3> <p>EVGA has unveiled its GeForce <strong><a title="780" href="" target="_blank">GTX 780</a></strong> as well as a new GPU cooling design dubbed <strong>ACX</strong> that it plans to stick on all its high-end GPUs for the foreseeable future. The cooler’s acronym stands for Active Cooling Extreme since it uses active cooling and it’s more extreme than getting a Red Bull enema.&nbsp;</p> <p>Honestly, it’s high time <a title="evga" href="" target="_blank">EVGA</a> came out with this, as it’s been using a slightly modified version of the Nvidia reference “blower” design for way too long, so it’ll now be able to compete with Asus’s DirectCU II, MSI’s Twin Frozr, and Gigabyte’s Windforce designs. 
EVGA says the new cooler offers a 40 percent increase in heatsink volume, which translates to 15 percent lower temps and totally silent operation. The biggest thing it’s promoting is that the fans use ball bearings instead of the sleeved variety, allowing for longer life and quieter operation. The new heatsink covers the entire card—all 10.5 inches of it—so the VRMs and RAM are also covered by the cooling apparatus. EVGA offers nine variants of this particular card, and this is its higher-end air-cooled model, the SuperClocked ACX board. There are also FTW and Classified editions that are clocked a bit higher.</p> <p>Compared to the stock design, which has a <a title="titan" href="" target="_blank">Titan</a> cooler by the way, this silver siren features a 104MHz overclock to the base clock, a 118MHz overclock to the boost clock, and the aforementioned extreme cooler. It retains the stock card’s 3GB of memory and 6GHz memory clock. That huge-ass fancy cooler only adds $10 to the price of the stock card, too, which is surprising. Sure, we’re used to seeing aftermarket coolers go for $10 or $20 over stock, but this cooler looks so premium we expected it to be more expensive, especially since the card is also overclocked. We should point out that the card’s hardware “bundle” is, well, crappy and small, but we are coming to terms with the state of video card bundles now—which is to say they’re all like this.</p> <p style="text-align: center;"><img src="/files/u154082/gpu10191.jpg" alt="780" title="780" width="620" height="419" /></p> <p>In testing, we saw the ACX-cooled GTX 780 run neck-and-neck with the more expensive GTX Titan, effectively closing the gap between the two cards in a way that just isn’t possible with the GTX 780 reference board, at least not in our testing.
This is the first card we’ve ever seen get this close to a Titan, and in the tests where it didn’t edge past the Titan, the ACX card matched it, which is damn impressive. It was able to match the Titan in Heaven 4.0, Far Cry 3, Tomb Raider, and Battlefield 3.</p> <p>More good news: We were able to overclock the ACX board a fair bit, eventually getting it up to a 1,149MHz boost clock by nudging the power target slider to 106 percent in the superb PrecisionX software and setting the GPU offset to +59MHz. Under full load, overclocked, the ACX cooler kept the card at a steady 75 C, which is about 10 C cooler than stock.</p> <p style="font-style: normal;">Let's recap, then: sexy good looks and blistering benchmarks, cool and quiet performance, overclockable, as fast as a Titan, the best overclocking software around, and only $10 more than a stock board. Sounds like basically a perfect video card to us.</p> <p style="font-style: normal; text-align: center;"><em><img src="/files/u163784/evga_benchmarks.png" alt="Benchmarks" title="Benchmarks" width="650" height="490" style="line-height: 15px; text-align: center;" /></em></p> active cooling extreme acx evga geforce gtx 780 kick-ass nvidia Reviews Tue, 11 Feb 2014 23:19:42 +0000 Josh Norem 27246 at Maximum PC Dream Machine Chassis/GTX 780 Ti Giveaway <!--paging_filter--><h3><img src="/files/u160391/newsfeed.jpg" width="250" height="125" style="float: right;" />Win some holiday loot care of Maximum PC</h3> <p>The holidays are just around the corner, and you're hoping to give (or receive) some awesome tech, right? We've got a giveaway that just might make things easier for you on the gift-giving side. Or maybe you wanna win these babies for yourself.
We've got our 2013 <strong>Dream Machine chassis</strong> and a <strong>Gigabyte GeForce GTX 780 Ti </strong>video card up for grabs, and we want to give them to one of you!</p> <p>Yes, one winner will make off with this awesome prize package with a combined retail value of $1,099. If you're still unconvinced, check out what we had to say about the <a href="" target="_self">Dream Machine</a> and the <a href="" target="_self">GTX card</a> so you can see what you're getting into. That's one heck of a stocking stuffer, right? If you want to win, head on over to the <a href="" target="_self">official contest page</a> to enter with your full name, email, and zip code starting today at 9 AM PST and ending at 9 AM PST, December 18th. What do you have to lose? <a href="" target="_self">Go forth and win some hardware</a>!</p> contest dream machine geforce news sweepstakes News Sat, 14 Dec 2013 19:26:01 +0000 Brittany Vincent 26888 at GeForce 331.82 Driver Boosts Performance Up to 50 Percent in Several Titles <!--paging_filter--><h3><img src="/files/u69/geforce_titan_2.jpg" alt="GeForce Titan" title="GeForce Titan" width="228" height="179" style="float: right;" />New WHQL-certified GPU drivers from Nvidia now available</h3> <p>Want to speed up your <a href=""><strong>Nvidia</strong></a> graphics card without overclocking? Lucky for you, Nvidia today released new WHQL-certified drivers -- GeForce 331.82 -- that the GPU maker claims will increase performance by up to 50 percent for GeForce 400, 500, 600, and 700 Series graphics cards in "several PC games."
One of those titles is Metro: Last Light, though beyond that specific game, Nvidia didn't say which others receive such a generous boost.</p> <p>Other examples Nvidia provided include up to 26 percent better performance in Crysis 3 and up to an 18 percent boost in Battlefield 4 after installing the new driver (versus GeForce 327.23).</p> <p>In addition to performance gains, GeForce 331.82 also introduces new and updated SLI profiles and enables GameStream and ShadowPlay technologies. Other than those tidbits, you can expect the usual round of tweaks that result in better performance and bug fixes.</p> <p>You can grab the <a href="" target="_blank">new driver from Nvidia</a>. It's also bundled with <a href="" target="_blank">GeForce Experience</a> v1.7.1.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> 331.82 driver games geforce gpu Software News Tue, 19 Nov 2013 17:36:13 +0000 Paul Lilly 26723 at Nvidia GeForce GTX 780 Ti Benchmarks <!--paging_filter--><h3>Return of the King</h3> <p><img src="" alt="GTX 780 Ti" title="GTX 780 Ti" width="250" height="166" style="float: right;" />With the GeForce GTX 780 Ti, <a href=""><strong>Nvidia</strong></a> has snatched the single-GPU performance crown back from the clutches of the recently launched Radeon R9 290X, and not just by a small margin either, but by a landslide. By dethroning the R9 290X, Nvidia has also taken the GTX Titan to the woodshed, as the GTX 780 Ti is far and away the fastest single GPU we have ever tested.
Read on to see how it fares against the GTX 780, the R9 290X, and the former champ, the GTX Titan.</p> <h3>The Real Big Kepler</h3> <p>Back when the <a href="">GTX Titan launched</a> we all proclaimed it to be "<a href="">Big Kepler</a>," or the full implementation of the Kepler architecture instead of the half-Kepler GK104 we got with the <a href="">GTX 680</a>. Of course, we all loved the GTX 680 at the time, but it was roughly half the size of the GK110 chip Nvidia had deployed to supercomputers worldwide. When Nvidia finally got around to stuffing the GK110 into a gaming GPU named Titan, we all rejoiced since we had finally acquired the real-deal Holyfield Big Kepler GPU.</p> <p style="text-align: center;"><img src="/files/u302/nvidia_geforce_gtx_780ti_top_small.jpg" alt="780 Ti" title="780 Ti" width="650" height="313" /></p> <p style="text-align: center;"><strong>It's hard to notice in this image, but the cooling shroud has a darker, smoked appearance to match the darker lettering. </strong></p> <p>However, even the Titan wasn't a full GK110 part, as it had one of its SMX units disabled. This raised the question: would Nvidia ever release a Titan Ultra with all SMX units intact? With the GTX 780 Ti we finally have that card. Not only does it have all 15 SMX units enabled, this bad mutha also has the fastest memory available on an Nvidia GPU with its 3GB of 7GHz GDDR5 RAM. Previously, this speed of memory was only found on the mid-range GTX 770. The bottom line is Nvidia is pulling out all the stops with the GTX 780 Ti in an effort to shame the <a href="">R9 290X</a> and once again establish itself as the king of the single-GPU space. It should be noted that the GTX 780 Ti does not offer double-precision compute performance like the GTX Titan does, so CUDA developers will still prefer that card. The GTX 780 Ti is made for gamers, not scientists.
We should also point out that the GTX 780 Ti supports quad-SLI, just like the GTX Titan, and the GTX 780 does not.</p> <h3>GTX 780 Ti Specs</h3> <p>Let's have a look at the specs of the GTX 780 Ti along with its closest competitors.</p> <p style="text-align: center;"><img src="/files/u302/780_ti_specs_2.jpg" alt="GTX 780 Ti Specs" title="GTX 780 Ti Specs" width="467" height="594" /></p> <p style="text-align: center;"><strong>*The R9 290X's TDP isn't a quoted spec from AMD but rather one with air quotes around it. We believe it to be a bit higher than 250w. </strong></p> <p style="text-align: left;">On paper it's clear the GTX 780 Ti has a higher specification than either of its competitors, not to mention the obvious GTX 780. Although its memory bus isn't as wide as the R9 290X's, it has faster memory, so it's able to achieve higher overall memory bandwidth. The R9 290X is capable of pushing 320GB/s thanks to its slower 5GHz memory but wider 512-bit channel, while the GTX 780 Ti's faster 7GHz memory can squeeze 336GB/s through its narrower 384-bit bus. The GTX 780 Ti has more processing cores as well, and thanks to Kepler's higher level of efficiency compared to AMD's GCN architecture, it's able to sustain much higher clock rates at all times. All that adds up to one ass-kicking GPU, as we'll see shortly. Like the GTX 780, the card measures 10.5 inches in length and requires a six-pin and an eight-pin power connector. TDP is unchanged at 250w.</p> <h3 style="text-align: left;">What's New Compared to the GTX 780</h3> <p style="text-align: center;"><img src="/files/u302/shadow_0.jpg" alt="GTX 780 Ti " title="GTX 780 Ti " width="550" height="312" /></p> <p style="text-align: left;">Since this board carries the GTX 780 moniker, let's look at how it is different from the GTX 780, because remember, this card costs $200 more than the original GTX 780 now that Nvidia has <a href="">lowered its price</a>.
First up, it has 25 percent more CUDA cores, going from 2,304 to 2,880, which is quite a jump. Second, it has faster GDDR5 memory, which has been bumped up a full 1GHz to 7GHz. Third, it has a new feature Nvidia calls Max OC that simply balances the power going to the card from its three sources: the six-pin and eight-pin rails, and the PCI Express bus. Nvidia claims the board usually does this on its own quite well, but when overclocking all bets are off, and not enough power from one source could limit the overclock. Nvidia claims this situation is rectified on the GTX 780 Ti, so you should be able to overclock this board higher than you could a GTX Titan or GTX 780. Finally, though it's not a new feature, this card also supports GPU Boost 2.0, like the other cards in the 700 series. However, with the arrival of the variable-clock-rate Radeon R9 290X, Nvidia is pointing out that it guarantees a base level of performance on all its 700 series cards, regardless of operating conditions. This is in contrast to the new Hawaii boards from AMD, which state a "max clock speed" but not the actual average clock speed under load, which tends to be a bit lower. We'll have more on that a bit later.</p> <h3 style="text-align: left;">G-Sync</h3> <p style="text-align: left;">One of the most interesting features Nvidia has announced recently for its Kepler GPUs is <a href="">G-Sync</a>, which is technology built into upcoming LCDs that enables them to work hand-in-hand with the Kepler GPU to sync the refresh rate with the frames coming out of the GPU. It's essentially the end of V-sync as we know it, and since most hardcore gamers never use V-sync, we couldn't be more thrilled about this technology. By syncing the monitor's refresh rate with the actual framerate coming out of the GPU, tearing and shearing are totally eliminated, resulting in a much smoother visual experience on-screen. There are some caveats, of course.
First, we have not tested or witnessed G-Sync in action in our own lab, and have only seen an Nvidia-prepared demo of the tech, but what we've seen so far looks very good, and we have no reason to doubt that it will fulfill its promises once it lands in the lab.</p> <p style="text-align: left;"><img src="/files/u302/gsync-monitor-key-visual_small.jpg" alt="Nvidia G-Sync" title="Nvidia G-Sync" width="650" height="426" /></p> <p style="text-align: center;"><strong>In order to experience Nvidia's G-Sync technology you'll need a G-Sync LCD. The first one from Asus is a $400 24" model.</strong></p> <p style="text-align: left;">However, since we haven't seen it yet as the monitors are not yet available, we'll have to wait to deliver a verdict on this particular piece of gear. Second, in order to acquire this technology you will have to first acquire a G-Sync display, or buy an actual PCB and mod your monitor somehow. We're not sure how that would work, or what monitors will allow it, so again, we'll have to wait and see. We don't believe most gamers will want to buy a new LCD just to get this technology, however. Still, kudos to Nvidia for taking on a problem that has existed for as long as we can remember. If it really is as good as John Carmack and Tim Sweeney say it is, it could revolutionize the gaming industry.</p> <h3 style="text-align: left;">ShadowPlay</h3> <p style="text-align: center;"><img src="/files/u302/shadowplay_0.jpg" alt="ShadowPlay" title="ShadowPlay" width="650" height="358" /></p> <p style="text-align: center;"><strong>ShadowPlay is more efficient than FRAPS, and doesn't consume your entire hard drive either.</strong></p> <p style="text-align: left;">We covered this technology at the GTX Titan launch, and back then it was "coming soon." Now that it's finally out, though still in beta, this is technology exclusive to Nvidia that should factor into one's purchasing decision.
Since we've already covered it, we'll keep it brief: it lets you capture gaming footage with almost no performance penalty, according to Nvidia. Once footage is captured, the onboard H.264 encoder built into the Kepler architecture compresses it to reduce file size, and it works in the background, always recording what you last did in the game, hence its name. We have been playing with it in the lab, so expect a writeup on our experience with it shortly.</p> <h4 style="text-align: left;"><em>Hit the second page for a discussion of heat, power, overclocking, benchmarks, and our final thoughts.</em></h4> <h4 style="text-align: left;"> <hr /></h4> <h3 style="text-align: left;">Heat, Power, and Overclocking</h3> <p style="text-align: left;">We'll cover the R9 290X "Golden Sample" controversy below, but for now let's focus on the GTX 780 Ti. Like all Kepler cards it runs very cool and very quiet. Even with its extra cores and faster RAM, it is typical to see it hit about 82 C under load, and at that temperature it was barely audible in testing. This is the exact same experience we had with the GTX 780 before it, and the GTX Titan as well. These cards run very quiet and never get too hot. And now that the R9 290X is out, the Nvidia cards seem downright chilly by comparison.</p> <p style="text-align: left;">As far as overclocking is concerned, we've always had a very easy time overclocking Kepler boards, and the GTX 780 Ti was no different. Though Nvidia claims this board overclocks better than the GTX 780 and GTX Titan thanks to its load-balancing tech, we didn't experience that. Instead, we achieved results that were just a tad lower than what we experienced with boards like the Asus GTX 780 DC2 and EVGA GTX 780 ACX. Overall, we were able to hit a 1,225MHz boost clock with a 250MHz memory overclock, which is pretty damn good. When overclocked, the board hit 85 C and had its fan spinning at 67 percent, though it was still quieter than the R9 290X fan at 49 percent.
Keep in mind we were unable to overclock the Radeon R9 290X, since in its default "quiet" mode it hits 94C quite easily right out of the box, leaving no thermal headroom. Sure, the R9 290X already runs at or around 1,000MHz during normal operation, which is higher than the stated Boost clock of the GTX 780 Ti, but in reality the R9 290X's typical clock speed is closer to 950MHz. Nvidia would say it's actually around 800MHz, but more on that later.</p> <p style="text-align: left;"><strong>2560x1600 Benchmarks</strong></p> <p style="text-align: left;">Our default resolution for cards of this stature is 2560x1600 with 4XAA enabled, and all details fully maxed out. We play with everything turned up as high as possible, because, well, this is Maximum PC you are reading. Let's examine the numbers:</p> <p style="text-align: center;"><strong>2560x1600 Benchmarks</strong></p> <p style="text-align: center;"><img src="/files/u302/780ti_benches_final_0.jpg" alt="2560x1600 Benchmarks" title="2560x1600 Benchmarks" width="466" height="648" /></p> <p style="text-align: center;"><em><span style="font-size: 10.5pt; line-height: 150%; font-family: &quot;Arial&quot;,&quot;sans-serif&quot;; color: black; border: none windowtext 1.0pt; mso-border-alt: none windowtext 0in; padding: 0in; background: white;">Best scores are bolded. Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus Rampage IV Extreme motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 8. All games are run at max everything at 2560x1600 with 4X AA except for the 3DMark tests.</span></em></p> <p style="text-align: left;">Now then, with the numbers in front of us we can begin to explore the complicated question of where these three cards stand in the current leaderboards. We are just kidding, of course, because one look at this chart makes one thing immediately clear: the GTX 780 Ti kicks the crap out of everything, by a lot. 
We're used to seeing a few frames per second of difference between one card and another when comparing cards of the same generation, but the GTX 780 Ti is in a league all by itself. Nothing else even comes close, not even the mighty Titan, which costs $300 more. Of course, the R9 290X costs $150 less, so there's that to consider, but the end result from these tests is one simple statement: Nvidia makes the fastest single GPU in the world, period. Unless AMD has a new piece of silicon even faster than Hawaii up its sleeve, which would be pretty amazing if true, it will be handing the fastest-GPU crown back to Nvidia for the time being. We imagine Nvidia will hold onto this title for a while too, as AMD can't push the R9 290X any further than it already has. We suppose a water-cooled or super-air-cooled R9 290X could boost performance a bit, but the best AMD could hope for would be to match Nvidia's card. We doubt it will be able to beat it any time soon.</p> <h3 style="text-align: left;">4K Benchmarks</h3> <p style="text-align: left;">With a card this powerful, you can certainly run most of the latest games at 4K resolution. And if you have the kind of cash to spring for a $700 GPU, you might also have the $5K or so required to land one of these sexy LCDs on your desk. Our hats are off to you, rich PC gamer, as gaming in 4K is truly breathtaking. Okay, here are the numbers:</p> <p style="text-align: center;"><strong>3840x2160 Benchmarks</strong></p> <p style="text-align: center;"><img src="/files/u302/780ti_4k_0.jpg" alt="4k Benchmarks" title="4k Benchmarks" width="415" height="603" /></p> <p style="text-align: center;"><em><span style="font-size: 10.5pt; line-height: 150%; font-family: &quot;Arial&quot;,&quot;sans-serif&quot;; color: black; border: none windowtext 1.0pt; mso-border-alt: none windowtext 0in; padding: 0in; background: white;">Best scores are bolded. 
Our test bed is a 3.33GHz Core i7 3960X Extreme Edition in an Asus Rampage IV Extreme motherboard with 16GB of DDR3/1600 and a Thermaltake ToughPower 1,050W PSU. The OS is 64-bit Windows 8. All games are run at 3840x2160 with max everything, AA turned off. We do not have scores for the GTX 780 with Batman as we ran out of time to test it, but will update this chart ASAP.</span></em></p> <p style="text-align: left;">At 4K the GTX 780 performs quite well, though not as well as the more expensive Titan, and it also performed slightly worse in Battlefield 3 than the R9 290X. That said, the reviews of the R9 290X and the R9 290 generally showed the AMD cards performing better than their Nvidia counterparts at 4K. As we stated in our review of the R9 290X, AMD sent us a 4K panel in order to highlight this advantage it had over Nvidia, presumably due to its card having higher memory bandwidth and more memory. With the GTX 780 Ti, however, that advantage has largely been wiped out. Still, it's worth keeping in mind that the $550 R9 290X performed quite well at 4K against its more expensive competition from Nvidia, so in a way it still holds a slight advantage, at least at this resolution. That's not worth very much in the real world, though, as we can't imagine many people are gaming at 4K yet. It's just too expensive at this time, though it's amazing that a single GPU can run the latest games at decent frame rates at this resolution. We are truly living in an amazing time given all the GPU power at our disposal.</p> <h3 style="text-align: left;">A Final Note on Heat, Noise, and Performance</h3> <p style="text-align: left;">A lot of ink has been spilled this week, at least digitally, on the heat, noise, and power consumption of the card that dethroned the GTX 780, the Radeon R9 290X. The reason for all the hubbub is twofold. First, AMD doesn't state a base clock for this GPU like it has done with previous cards. 
Instead, it states a "maximum clock speed" that the card can reach given enough thermal headroom. Once it hits the thermal limit, which is exactly 94C on the R9 290X, it begins to throttle clock speeds to keep temperatures in check. When clock speeds go down, so does performance. If clock speeds drop just a tiny bit, say 50MHz, performance won't suffer much. However, Nvidia claims that when the R9 290X is set to its default "quiet" mode, clock speeds can go as low as 700MHz and then stay in that neighborhood until the card cools down, resulting in reduced overall performance.</p> <p style="text-align: left;">In our testing we did not experience a radical decline in clock speeds on the R9 290X. Sure, the clock speed fluctuates, but it generally stays above 900MHz. We even ran some tests to see how much our R9 290X press board would fluctuate: we let the card get up to 94C, then ran Heaven 4.0 and recorded a score of 33.4 frames per second (we know the chart above shows 36fps). We then let the R9 290X run overnight, for approximately 16 hours, to ensure the card was as hot as Hades. We ran the Heaven 4.0 test again and the score was 33.6 frames per second, so performance did not degrade over time despite the card being as hot as possible. We also examined the graph showing clock speed changes over that time period, and though there were small dips, it was still pretty consistent. These tests were performed with the card in its stock "quiet" mode, in which the fan never goes above 40 percent. 
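The throttle-and-recover behavior described here (run at the advertised maximum clock until the 94C limit is hit, then step the clock down until the temperature recovers) can be sketched as a simple feedback loop. To be clear, this is an illustrative toy model; every constant and the heating/cooling math below are our own assumptions for the sketch, not AMD's actual firmware logic:

```python
# Toy model of temperature-based GPU throttling: the card runs at its stated
# maximum clock until it reaches the thermal limit, then steps the clock down
# until temperatures recover. All numbers are illustrative assumptions.

MAX_CLOCK_MHZ = 1000   # AMD's stated "up to" clock for the R9 290X
MIN_CLOCK_MHZ = 700    # floor Nvidia claims the card can fall to in quiet mode
THERMAL_LIMIT_C = 94   # throttle point
STEP_MHZ = 50          # how much the clock moves per control step

def throttle_step(clock_mhz: int, temp_c: float) -> int:
    """Return the next clock speed given the current temperature."""
    if temp_c >= THERMAL_LIMIT_C:
        return max(MIN_CLOCK_MHZ, clock_mhz - STEP_MHZ)
    # Below the limit, the card ramps back toward its maximum clock.
    return min(MAX_CLOCK_MHZ, clock_mhz + STEP_MHZ)

def simulate(load_heat_c: float, cooling_c: float, steps: int = 200) -> list:
    """Simulate clock speed over time under a constant gaming load."""
    clock, temp, history = MAX_CLOCK_MHZ, 60.0, []
    for _ in range(steps):
        # Heat generated scales with clock speed; the fan removes a fixed amount.
        temp += load_heat_c * (clock / MAX_CLOCK_MHZ) - cooling_c
        clock = throttle_step(clock, temp)
        history.append(clock)
    return history

# Weaker cooling stands in for "quiet" mode, stronger cooling for "uber" mode.
quiet = simulate(load_heat_c=2.0, cooling_c=1.7)
uber = simulate(load_heat_c=2.0, cooling_c=1.95)
print(sum(quiet[-50:]) / 50, sum(uber[-50:]) / 50)
```

Under a sustained load, the weaker-cooled "quiet" run hits the limit and then oscillates below its maximum clock, while the stronger-cooled "uber" run never reaches 94C and holds its full clock, which is the same qualitative split between the two fan modes described above.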
It's in this mode that you will see the most clock speed fluctuation; in "Uber" mode, with the fan running at about 50 percent, there is very little fluctuation since the card's temps are more under control.</p> <p style="text-align: center;"><a class="thickbox" href="/files/u302/graph_temps_clocks_2.jpg"><img src="/files/u302/graph_temps_clocks_2_0.jpg" width="500" height="265" /></a></p> <p style="text-align: center;"><strong>This screenshot was taken after Heaven 4.0 had been running on the R9 290X for 16 hours. In this image you can see the GPU clock speed over time, the fan set to 40 percent (Quiet mode), and the temp of 94C. That is the R9 290X's standard operating temperature under load. Click the image to see it in full resolution.</strong></p> <p style="text-align: left;">Here's the rub: Even though the card provided to us by AMD didn't exhibit drastic clock speed fluctuation, other news outlets are reporting that retail boards acquired through e-tailers are showing major fluctuations. This would indicate that the boards provided to the press were "golden samples," or boards tested or configured not to exhibit the same behavior seen in retail boards. This is obviously a problem, for one simple reason: the boards we receive should be *exactly* the same as retail boards, period. But in this instance something is amiss, either with the press boards or with the retail boards, at least according to sites like the <a href="">Tech Report</a> and <a href=",3659-2.html">Tom's Hardware</a>. AMD says the problem lies with the retail boards, and it's working on a driver fix that will "minimize this variance," according to the statement provided to the Tech Report. For what it's worth, <a href="">a site in Sweden</a> also obtained retail R9 290X boards and found the benchmark scores to be identical to those of the press board. 
We will be obtaining a retail R9 290X and will post our test results soon.</p> <p style="text-align: left;">To Nvidia's credit, it specs its boards with a Base Clock that is guaranteed, and performance can only go up from there if you overclock. AMD, at least this time around, is doing the opposite by stating the maximum clock speed the card can achieve in ideal conditions, with performance only dropping from there. How much it drops is currently a matter of debate, but to be clear, in our testing we did not experience the drastic clock speed fluctuations reported in retail cards and claimed by Nvidia. Even in our overnight test of the R9 290X we did not see a drop in performance.</p> <h3 style="text-align: left;">Final Thoughts</h3> <p>With the release of the GTX 780 Ti, Nvidia once again lays claim to the title of fastest single GPU in the world. We haven't seen a card dominate the high end like this in a while, probably not since the GTX Titan was released. Not only is it fast, but like the other Kepler cards it's cool and quiet, two traits that have gained new appreciation this week as gamers consider the new Hawaii cards from AMD. Both of those cards represent very strong price-to-performance ratios, but both of them run hot and are noticeably louder than their Nvidia equivalents. We don't think the heat and noise are deal breakers, however.</p> <p>Naturally, the GTX 780 Ti costs significantly more than the R9 290X, so we would expect it to outperform AMD's card accordingly, and it certainly does. 
Barring some unforeseen new GPU from AMD, it seems Nvidia will remain the uncontested fastest-GPU provider for the near future, at least until its new Maxwell cards come online sometime in 2014.</p> 780 ti Build a PC geforce gpu graphics card Hardware kepler nvidia reviews Video Card Reviews Videocards Thu, 07 Nov 2013 14:00:50 +0000 josh norem 26648 at Nvidia Announces Nvidia G-Sync, Gamestream, and Much More <!--paging_filter--><h3><img src="/files/u154082/dsc00446.jpg" alt="G-Sync" title="Nvidia G-Sync" width="250" height="141" style="float: right;" />Nvidia The Way It's Meant to be Played 2013 (Day Two)</h3> <p>Day two of <a title="nvidia" href="" target="_blank">Nvidia</a>'s <a title="nvidia editor event day 1 2013" href="" target="_blank">The Way It's Meant to be Played event</a>&nbsp;has come to a close with the green team making a bevy of announcements. The company announced its innovative G-Sync monitor technology, the GeForce GTX 780 Ti, a future update that will let the <a title="shield" href="" target="_blank">Shield</a> act as a quasi-console, and more.</p> <p>All of the slides and more info are contained in the image gallery below. Take a look and let us know what you thought of Nvidia's announcements in the comments section.</p> 780 ti geforce gpu graphics card gsync john carmack montreal nvidia tim sweeny Video Card Gaming News Features Sat, 19 Oct 2013 17:49:01 +0000 Jimmy Thang 26529 at Nvidia The Way It's Meant to be Played 2013 (Day One) <!--paging_filter--><h3><img src="/files/u154082/dsc00254.jpg" alt="The way it's meant to be played 2013" title="The way it's meant to be played 2013" width="250" height="141" style="float: right;" />Nvidia discusses next-generation graphics, new development software SDKs, and games at its Montreal event</h3> <p>We had the chance to check out <a title="nvidia" href="" target="_blank">Nvidia</a>'s <strong>The Way It's Meant to be Played 2013</strong> event in Montreal, Canada. 
The two-day editor's event officially kicked off today and centered on the company's promising new game development tools, which the green team asserts will usher in truly next-generation graphics. New graphical features like "FlameWorks" and "FleX" were announced, along with improvements to existing tools like PhysX and more.</p> <p>Today covered more of the software and game side of development, but Nvidia assures us that day two of the event will focus more on graphics technology. Hopefully we'll hear some new hardware announcements! (A tech geek can only hope!)</p> <p>Until then, you can check out all of today's slides below and let us know what you think of Nvidia's offerings so far in the comments below!</p> 2013 ambient occlusion batman arkham origins day 1 editor's day geforce hbao montreal nvidia physx pictures the way it's meant to be played Gaming News Features Fri, 18 Oct 2013 04:11:38 +0000 Jimmy Thang 26516 at Valve Says Steam Machines Won’t Be Exclusive to Nvidia <!--paging_filter--><h3>Valve confirms Steam boxes will also support Intel and AMD</h3> <p>When <a title="valve" href="" target="_blank">Valve</a> finally unwrapped the specs of the <a title="Steam machine prototypes" href="" target="_blank">300 beta Steam Machines</a> last week, many assumed that <a title="nvidia" href="" target="_blank">Nvidia</a> would have a lock on Valve’s Linux-based gaming machines. Today, though, Valve broke cover yet again, telling us that Steam Machines will support all three primary graphics vendors.</p> <p>“Last week, we posted some technical specs of our first wave of Steam Machine prototypes,” said Valve spokesman Doug Lombardi. “Although the graphics hardware that we've selected for the first wave of prototypes is a variety of Nvidia cards, that is not an indication that Steam Machines are Nvidia-only. 
In 2014, there will be Steam Machines commercially available with graphics hardware made by <a title="amd" href="" target="_blank">AMD</a>, Nvidia, and <a title="Intel" href="" target="_blank">Intel</a>. Valve has worked closely together with all three of these companies on optimizing their hardware for SteamOS, and will continue to do so into the foreseeable future.”</p> <h3 style="text-align: center;"><img src="/files/u154082/steam_machine_2.jpg" alt="steam machine" title="steam machine" width="578" height="323" /></h3> <p>The statement ends speculation that Nvidia had an exclusive on the highly anticipated gaming platform. The first 300 beta Steam Machines were all based on GeForce cards, including the Titan, GTX 780, GTX 760, and GTX 660. The machines would also feature mostly Intel Haswell CPUs, hybrid SSHDs, and 450-watt power supplies.</p> amd cpu geforce gpu graphics card intel nvidia Specs steam box Valve Gaming News Wed, 09 Oct 2013 23:45:33 +0000 Gordon Mah Ung 26469 at Nvidia Releases WHQL Certified 327.23 Drivers <!--paging_filter--><h3><img src="/files/u69/gtx_780.jpg" alt="Nvidia GeForce GTX 780" title="Nvidia GeForce GTX 780" width="228" height="164" style="float: right;" />An "essential update for all GeForce GTX users," Nvidia says</h3> <p>This has been a good week for gamers. Intel, AMD, and now <a href=""><strong>Nvidia</strong></a> have all released new graphics drivers, the latter of which says its GeForce 327.23 WHQL-certified drivers represent an "essential update" no matter which GeForce GTX GPU you own, as they deliver maximum stability and get you ready for upcoming games like Batman: Arkham Origins. It's also the first WHQL-certified update from Nvidia since July.</p> <p>The new drivers offer up to a 19 percent performance boost for GeForce 400 through 700 Series GPUs in several PC games when compared to GeForce 320.49 drivers. 
Nvidia provided a few examples of its claimed performance gains:</p> <ul> <li>Up to 15 percent in Dirt: Showdown (GeForce GTX 770)</li> <li>Up to 6 percent in Tomb Raider (GeForce GTX 770)</li> <li>Up to 19 percent in Dirt: Showdown (GeForce GTX 770 SLI)</li> <li>Up to 11 percent in F1 2012 (GeForce GTX 770 SLI)</li> </ul> <p>Nvidia's updated drivers also add an SLI profile to both Splinter Cell: Blacklist and Batman: Arkham Origins, enable GeForce-to-SHIELD streaming, add support for additional tiled 4K displays, add support for 4K FCAT testing, and extend support for tiled 4K features, among a few other changes.</p> <p>You can download the update on <a href="" target="_blank">Nvidia's drivers page</a>.</p> <p><em>Follow Paul on <a href="" target="_blank">Google+</a>, <a href="!/paul_b_lilly" target="_blank">Twitter</a>, and <a href="" target="_blank">Facebook</a></em></p> 327.23 Drivers Gaming geforce gpu nvidia Software News Thu, 19 Sep 2013 15:27:35 +0000 Paul Lilly 26341 at