Intel is going to need to start dressing up in a tricked-out leisure suit with lots of bling and a plumed hat if it keeps pimping SSD technology. On the last day of IDF 2008, Intel wanted to hammer home why hardcore gamers should be interested in its Mainstream and Extreme SSDs, and it worked to dispel the myths that have sprung up around SSDs.
Chris Saleski from the Storage Technologies Group showed off some pretty spectacular benchmarks: 500GB, 7,200 RPM Seagate Barracuda drives in a RAID array were getting just under 550 IOPS, versus a single 80GB X25-M Mainstream SSD posting 44,000+ IOPS. Holy frack! I have to wonder just how accurate that figure is, and I'll keep an eye out for independent verification.
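For context, IOPS figures like these typically come from small random reads, exactly the workload where spinning platters pay a seek penalty on every operation and flash doesn't. Here's a minimal Python sketch of what such a test does (our own illustration, not Intel's benchmark methodology; it assumes a large pre-created test file):

```python
import os
import random
import time

# Minimal random-read IOPS test (illustrative only, not Intel's methodology).
# Assumes a large pre-created file; real tools open with O_DIRECT to bypass
# the OS page cache, which would otherwise inflate the numbers.
PATH = "testfile.bin"   # hypothetical pre-created test file, e.g. 1 GB
BLOCK = 4096            # 4 KB, a typical block size for IOPS testing
DURATION = 10           # seconds to run

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
ops = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    offset = random.randrange(0, size - BLOCK)  # random seek target
    os.pread(fd, BLOCK, offset)                 # one I/O operation
    ops += 1
os.close(fd)
print(f"{ops / DURATION:.0f} IOPS")
```

A hard drive has to physically move its heads for nearly every one of those random offsets; an SSD just looks up the address, which is why the gap is three orders of magnitude rather than a few percent.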
Falcon Northwest's general manager Bradd Berdelman ran another demo. He put together a pair of identical FragBoxes, one containing two of the vaunted 10,000 RPM WD VelociRaptors in RAID and the other running an SSD setup. The SSD system turned in 32.65 FPS versus 16.76 FPS for the VelociRaptor system.
Intel is preaching to the choir here. System enthusiasts like SSDs and we want to buy them, but when a single modern game can hog 6GB of drive space, we aren't going to buy them in 80GB sizes for a king's ransom. Put the products in our hands, and if they start turning in those sorts of performance scores while sizes go up and prices come down, you'll get us to buy them in droves. No pimping required.
Intel's CTO, Justin Rattner, delivered a pretty phantasmagoric keynote at IDF in San Francisco. Most keynotes by tech honchos are about future technologies, but Rattner didn't just concern himself with the imminent future (the Nehalems and Larrabees); he allowed his imagination to take unbridled flight. He pictured what the world might be like in 2050, when computers would be smarter than us frizzled, frayed Homo sapiens.
“There is speculation that we may be approaching an inflection point where the rate of technology advancements is accelerating at an exponential rate, and machines could even overtake humans in their ability to reason, in the not so distant future,” Rattner said.
Rattner even demonstrated a couple of personal robot prototypes that employ razor-sharp sensing technologies, though they're only crude precursors to the "2050 machines". The first robot (a robotic arm, actually) was equipped with electric field pre-touch technology that lets it sense objects before even touching them; incidentally, fish have this same capability. The second robot could recognize faces and perform simple tasks on command.
We've already had some hands-on time with Bloomfield, Intel's high-end Nehalem part (officially named Core i7). But we know that not everyone's going to jump on board this new platform when it's released later this year. Bloomfield pricing hasn't been announced yet, but we expect it to be in the high-end enthusiast range, i.e. only affordable for buyers who aren't price-conscious.
For mainstream system builders, Intel's solution will be Lynnfield, a socket 1160 CPU that'll have its own motherboard configuration. Lynnfield processors will be incompatible with X58 motherboards sporting socket 1366, though Intel assured us that it won't phase out the Bloomfield platform once Lynnfield is released in Q1 of next year (unlike what happened with AMD's socket 940 platform). Another difference: Lynnfield motherboards will run dual-channel DDR3 memory, as opposed to the highly touted tri-channel setup in Bloomfield.
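That channel count difference translates directly into peak memory bandwidth. Here's a quick back-of-the-envelope comparison in Python (DDR3-1066 on both platforms is our assumption; Intel hasn't confirmed launch memory speeds):

```python
# Peak theoretical bandwidth = transfers/sec x 8 bytes (64-bit channel) x channels.
# DDR3-1066 is an assumption here, not a confirmed launch speed.
MTS = 1066e6          # transfers per second for DDR3-1066
BYTES_PER_XFER = 8    # each DDR3 channel is 64 bits wide

for name, channels in [("Lynnfield (dual-channel)", 2),
                       ("Bloomfield (tri-channel)", 3)]:
    gbps = MTS * BYTES_PER_XFER * channels / 1e9
    print(f"{name}: {gbps:.1f} GB/s peak")
# Lynnfield (dual-channel): 17.1 GB/s peak
# Bloomfield (tri-channel): 25.6 GB/s peak
```

Whether mainstream workloads can actually saturate either figure is another question, but on paper Bloomfield keeps a healthy 50 percent bandwidth edge.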
We were lucky enough to snap a few spy shots of an early Lynnfield motherboard, shown below:
Can you spot the differences between a Lynnfield and Bloomfield motherboard? Take a closer look after the jump.
It's not often that a technology comes along that significantly changes the way we do things, but we're on the verge of such a transition if Intel succeeds in its latest endeavor, and it has nothing to do with Nehalem. Instead, the chip maker has made progress in a technology that could pave the way for the wireless recharging of electronics.
Intel claims it has found a way to increase the efficiency of a technique for wirelessly powering consumer gadgets and computers, potentially allowing a person to power a device simply by placing it on a desk. In short, the technology could do for powering gadgets what Wi-Fi has done for internet access.
"Something like this technology could be embedded in tables and work surfaces, "said Justin Rattner, Intel's CTO, "so as soon as you put down an appropriately equipped device it would immediately begin drawing power.
The technology uses a magnetic field to broadcast up to 60 watts of power over two to three feet, losing only about 25 percent of the power in transmission. And while some startups have announced similar wireless charging technologies, those demonstrations have required the consumer gadgets to touch the charging station.
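The arithmetic from Intel's quoted figures is straightforward:

```python
# Delivered power from Intel's quoted figures: 60 W sent, ~25% lost in transit.
transmitted_watts = 60.0
loss_fraction = 0.25

delivered = transmitted_watts * (1 - loss_fraction)
print(f"Delivered: {delivered:.0f} W at {1 - loss_fraction:.0%} efficiency")
# Delivered: 45 W at 75% efficiency -- enough to run a typical notebook
```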
3dfx changed the gaming landscape forever when it brought 3D graphics to the masses, and in similar fashion, ray tracing looks to be the next big revolution on the horizon. The promise of photo-realistic scenery has excited both developers and gamers, but is real-time ray tracing in games anywhere close to being a reality?
In an interview with Tom's Hardware, Intel's Daniel Pohl talked about the API Intel is using to showcase ray tracing demos and what he thinks needs to happen before the technology will be ready for commercial development.
"Creating higher image quality even faster. That requires smart anti-aliasing algorithms, a level of detail mechanism without switching artifacts, particle systems that also work in reflections, a fast soft shadowing algorithm, adoption to upcoming hardware architectures. We have some topics to keep us busy," said Pohl.
In the case of ray tracing, it's a matter of the hardware needing to catch up with the software. Pohl and his team of ray tracing researchers have been "targeting future architectures that consists out of tens, hundreds, and even thousands of cores," noting an almost linear scaling of frame rates with the number of processor cores.
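That scaling behavior makes sense: ray tracing is embarrassingly parallel, since every pixel's primary ray can be computed independently of its neighbors. Here's a toy Python sketch of the idea (a hypothetical ray-sphere hit test fanned out across cores, not Intel's actual renderer):

```python
from multiprocessing import Pool
import math
import os

WIDTH, HEIGHT = 640, 480
SPHERE_Z, RADIUS = 3.0, 1.0   # one sphere straight down the view axis

def trace_row(y):
    """Primary-ray/sphere hit test for one scanline -- no shared state needed."""
    row = []
    for x in range(WIDTH):
        # Map the pixel to a ray direction on a simple pinhole camera.
        dx = x / WIDTH - 0.5
        dy = y / HEIGHT - 0.5
        dz = 1.0
        # Closest approach of the ray to the sphere center at (0, 0, SPHERE_Z):
        # t = (d . C) / |d|^2, then test the distance at that point.
        t = SPHERE_Z * dz / (dx * dx + dy * dy + dz * dz)
        px, py, pz = dx * t, dy * t, dz * t
        dist = math.sqrt(px * px + py * py + (pz - SPHERE_Z) ** 2)
        row.append(1 if dist <= RADIUS else 0)
    return row

if __name__ == "__main__":
    # Rows are independent, so throughput scales almost linearly with
    # worker count until serial overhead or memory bandwidth intrudes.
    with Pool(os.cpu_count()) as pool:
        image = pool.map(trace_row, range(HEIGHT))
    print(f"hit pixels: {sum(map(sum, image))}")
```

Rasterization, by contrast, has pipeline stages that are much harder to spread across general-purpose cores, which is why ray tracing researchers are so eager for the many-core future Pohl describes.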
Intel isn't the only one looking to push ray tracing technology into the mainstream, with Nvidia putting on demonstrations of its own. Here's hoping the race to the finish line ends up resembling more of a sprint than a marathon.
How the world turns. Mention overclocking ten years ago at IDF and a Pinkerton would escort you off the show floor to a room where three Intel engineers would beat you with old Pentium Pro motherboards. Today, Intel is actively promoting overclocking, though the chip giant is calling it Turbo Mode.
Turbo Mode is just one of several groundbreaking features in Nehalem, but it's certainly one of the most head-turning. How exactly does it work, and how do you control it? Walk with us as we decode Intel's Turbo Mode, show you how you'll set it up in the BIOS (with first photos), and tell you what to expect from your next heatsink.
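In broad strokes, Turbo Mode trades idle cores' thermal headroom for extra speed bins on the busy ones: the fewer cores active, the higher the remaining cores may clock, within the chip's power budget. Here's a toy Python model of that idea (the bin counts are our own invention for illustration; Intel hasn't published final turbo tables):

```python
# Toy model of Nehalem-style turbo: the fewer cores active, the more
# 133 MHz "speed bins" the active cores may climb within the TDP budget.
# These bin counts are illustrative, NOT Intel's published turbo table.
BASE_MHZ = 2666
BIN_MHZ = 133
TURBO_BINS = {1: 2, 2: 1, 3: 1, 4: 1}   # active cores -> extra bins (hypothetical)

def turbo_clock(active_cores):
    bins = TURBO_BINS.get(active_cores, 0)
    return BASE_MHZ + bins * BIN_MHZ

for n in range(1, 5):
    print(f"{n} core(s) active: {turbo_clock(n)} MHz")
```

The practical upshot: single-threaded apps get a free speed bump, and a better heatsink means more headroom for the chip to spend.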
Want to take a look at the Nehalem BIOS? Of course you do.
DreamWorks Animation and Intel announced at IDF that, beginning next year, all films under the DreamWorks banner will be in next-gen 3D. Last month, the studio announced that it was replacing its AMD hardware with Intel's future multi-core chips. Now it has been confirmed that Intel's upcoming Larrabee (codename) graphics chip will form the crux of the partnership. The two partners even unveiled a 3D movie image brand called InTru 3D. The technology is also targeted at the video game industry and the internet. AMD, for its part, has been touting its Cinema 2.0 tech, which it claims will pulverize the wall between movies and games.
During a private briefing with Intel at IDF yesterday to talk about Nehalem, we were given a demo of some cool software in development that makes good use of the new CPU's multi-threaded cores. Francois Piednoel, the Senior Performance Analyst (i.e. benchmarking guru) at Intel, describes Deep Viewer as a "science project" of sorts. It's an image sorting application that Intel acquired from an independent software developer, and it reminds us of Microsoft Live Labs' Seadragon technology (used in the recently released Photosynth online app). We're talking about near-infinite scaling of visual data (in this case, photos and videos) processed in real time on your display.
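For a flavor of why this kind of app loves cores: Seadragon-style viewers precompute a pyramid of progressively smaller versions of each image so the right resolution can stream in as you zoom, and each image's pyramid is independent work. Here's a hypothetical Python sketch of that preprocessing step (not Deep Viewer's actual code):

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical sketch of Seadragon-style preprocessing: build a pyramid of
# progressively halved resolutions per image so a viewer can stream the
# right level while zooming. This is NOT Deep Viewer's actual code.
def build_pyramid(dimensions):
    width, height = dimensions
    levels = []
    while width >= 1 and height >= 1:
        levels.append((width, height))   # a real app would resample pixels here
        width, height = width // 2, height // 2
    return levels

if __name__ == "__main__":
    photos = [(4096, 3072)] * 1000       # a big library of 12 MP images
    # Each image's pyramid is independent work -- exactly the kind of job
    # that spreads cleanly across Nehalem's eight hardware threads.
    with ProcessPoolExecutor() as pool:
        pyramids = list(pool.map(build_pyramid, photos))
    print(f"built {len(pyramids)} pyramids, {len(pyramids[0])} levels each")
```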
Intel adds a few processors and drops a few prices this month in its CPU lineup. There don't appear to be any shakeups from Intel's expected plans.
Intel's Core 2 Extreme quad-core line remains unchanged, but in the standard line, the Q9650 joins at the top, while the Q9550 drops 40 percent from $530 to $316, the Q9450's previous level. The Q9400 is also new, entering at $266, the same price as the Q9300 and the Q6700 (a 65nm-process CPU).
The only other price changes were in the Xeon line, with the new X3370 coming out and the X3360 dropping 40 percent to $316.
All prices are for 1,000-unit tray quantities.
We will certainly see more changes when Intel ships Bloomfield sometime in Q4.
Tom's Hardware reports that Intel will demonstrate Hynix's just-announced 16GB 2-rank DDR3 DIMM at this year's IDF. This comes on the heels of Elpida Memory's 16GB DDR2 FB-DIMM that I covered a few weeks ago.
Hynix's new DDR3 DIMM uses MetaRAM's DDR3 MetaSDRAM technology, letting manufacturers pack four times the amount of mainstream DRAM onto these sticks while remaining a drop-in solution within the standard DIMM power and thermal envelope.
Intel will also demonstrate a server with 160GB of memory using Hynix DDR3 R-DIMMs and MetaSDRAM technology in the Advanced Technology Zone.
DDR3 MetaRAM is similar to the previous generation of MetaRAM's DDR2 technology, which enables significantly more memory in a server. An added benefit of the DDR3 version is that it enables larger memory capacity without negatively impacting the operating frequency of the DDR3 memory channel. It is the only technology that has been demonstrated to run 24GB of DDR3 SDRAM in a channel at 1066 megatransfers per second (MT/s). Using three 16GB DIMMs, users can achieve 48GB per channel running at 1066 MT/s, while competing solutions max out at 16GB per channel at 1066 MT/s.
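The capacity claim is simple multiplication from the quoted figures:

```python
# Per-channel capacity from the figures quoted above.
DIMM_GB = 16
DIMMS_PER_CHANNEL = 3

print(f"MetaSDRAM: {DIMM_GB * DIMMS_PER_CHANNEL} GB per channel at 1066 MT/s")  # 48 GB
print("Competing solutions: 16 GB per channel at 1066 MT/s")
# The 160 GB demo server works out to ten 16 GB DIMMs, presumably
# spread across multiple channels.
```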
I thought we'd never see machines hitting Vista's (Ultimate and Business) 128GB RAM limit in its lifetime, but perhaps there is hope! If you have deep pockets, you could fill the typical four slots in an enthusiast's machine with 64GB of RAM. It would most likely be overkill, but wouldn't it be interesting to see what the performance stats would look like?