It's here, ladies and gentlemen - the Khronos Group today announced the release of the OpenGL 4.0 specification at GDC 2010 in San Francisco.
In short, the latest iteration "brings the very latest in cross-platform graphics acceleration and functionality" to PCs and workstations, but if you're looking for a bullet list of geeky details, we have you covered. Some of the benefits include:
two new shader stages that enable the GPU to offload geometry tessellation from the CPU;
per-sample fragment shaders and programmable fragment shader input positions for increased rendering quality and anti-aliasing flexibility;
drawing of data generated by OpenGL, or external APIs such as OpenCL, without CPU intervention;
shader subroutines for significantly increased programming flexibility;
separation of texture state and texture data through the addition of a new object type called sampler objects (see the sketch after this list);
64-bit double precision floating point shader operations and inputs/outputs for increased rendering accuracy and quality;
performance improvements, including instanced geometry shaders, instanced arrays, and a new timer query.
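For the graphics programmers in the audience, here's a minimal sketch of what that sampler object separation looks like in practice. It assumes a working GL context and an extension loader (GLEW is used here purely as an example), and that `texture` and `unit` come from elsewhere in your renderer; treat it as an illustration of the concept rather than anything lifted from the spec.

```c
/* Sketch: separating sampler state from texture data with sampler
 * objects (core since GL 3.3 and carried into GL 4.0). Assumes a GL
 * context and loader are already set up. */
#include <GL/glew.h>

void bind_with_sampler(GLuint texture, GLuint unit)
{
    GLuint sampler;
    glGenSamplers(1, &sampler);

    /* Filtering and wrapping now live on the sampler object,
     * not on the texture itself. */
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* The same texture data can be sampled with different state
     * simply by binding a different sampler to the texture unit. */
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindSampler(unit, sampler);
}
```

The payoff is that one set of texture data can be reused with several different filtering or wrapping configurations without duplicating the texture.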
"The release of OpenGL 4.0 is a major step forward in bringing state-of-the-art functionality to cross-platform graphics acceleration, and strengthens OpenGL’s leadership position as the epicenter of 3D graphics on the web, on mobile devices as well as on the desktop," said Barthold Lichtenbelt, OpenGL ARB working group chair and senior manager Core OpenGL at NVIDIA. “NVIDIA is pleased to announce that its upcoming Fermi-based graphics accelerators will fully support OpenGL 4.0 at launch."
So what does this all mean for Joe Gamer? That remains to be seen, and will ultimately be decided by developers. OpenGL 4.0 has DirectX 11 in its sights, and Khronos has no qualms about saying so. "OpenGL 4.0 exposes the same level of capability of GPUs as DirectX 11," the company said during a presentation at GDC.
What's the over/under on how long the Large Hadron Collider (LHC) stays running? We don't know the answer, but we'd be inclined to take the 'under' bet every time. In the latest bit of bad news, the atom-smashing machine will have to be shut down at the end of 2011 for up to a year in order to address design issues.
"It's something that, with a lot more resources and with a lot more manpower and quality control, possibly could have been avoided but I have difficulty in thinking that this is something that was a design error," said Dr. Steve Myers, a director of the European Organization for Nuclear Research.
"The standard phrase is that the LHC is its own prototype. We are pushing technologies towards their limits.
"You don't hear about the thousands or hundreds of thousands of other areas that have gone incredibly well. With a machine like LHC, you only build one and you only build it once."
Point well taken, but no less disappointing. Following 14 months of inaction, the machine was only recently restarted, but issues remain. Joints between the machine's magnets need to be strengthened before higher-energy collisions can take place. In the meantime, the decision has been made to run the LHC for 18 to 24 months at half power before pulling the plug for a year to make the necessary improvements. Bummer.
Despite a lingering recession, Microsoft isn't holding back when it comes to spending. According to Kevin Turner, Microsoft's chief operating officer, the Redmond giant will spend around $9.5 billion on research and development this year, about $3 billion more than the next closest tech company spends.
"Especially in light of the tough difficult macroeconomic times that we're coming out of, we chose to really lean in and double down on our innovation," Turner said.
Much of that investment will go towards the cloud, an area Turner sees his company becoming a leader in as it tries to "change and reinvent" itself. Turner also added that Microsoft will still maintain a significant on-premise software business, even as companies such as Google look to cloud-only software solutions.
Forget about traditional touchscreen displays, laser keyboards, and gesture-based controls. None of those have the same wacky sci-fi appeal as "Skinput," the new self-touch input method Carnegie Mellon University and Microsoft are tag teaming.
Skinput is essentially a touchscreen interface for your flesh, but don't worry, it doesn't require any surgery or limb replacements. Instead, a microchip-sized pico projector beams images onto your skin. When you tap on these, the signals are picked up by a special armband with a bio-acoustic sensing array built in.
"We resolve the location of finger taps in the arm and hand by analyzing mechanical vibrations to propagate through the body," the research team states in their abstract. "We collect these signals using a novel array of sensors worn as an armband.This approach provides an always available, naturally portable, and on-body finger input system."
The armband contains five piezoelectric cantilevers, each one weighted to respond to a particular band of sound frequencies. A different combination of sensors is triggered depending on where you tap yourself.
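To make that idea concrete, here's a deliberately simplified sketch of how a tap location could be resolved from those five sensor readings. The template values, location names, and nearest-match approach are our own illustration; the actual Skinput prototype relies on a trained classifier over richer acoustic features.

```c
/* Illustrative sketch only: classify a tap by comparing the five
 * cantilever amplitudes against calibrated per-location templates.
 * Sensor values, templates, and location names are hypothetical. */
#include <math.h>
#include <stdio.h>

#define NUM_SENSORS   5
#define NUM_LOCATIONS 3

static const char *locations[NUM_LOCATIONS] = { "wrist", "forearm", "palm" };

/* Average sensor response recorded during a calibration pass (made up). */
static const double templates[NUM_LOCATIONS][NUM_SENSORS] = {
    { 0.9, 0.6, 0.2, 0.1, 0.1 },   /* wrist   */
    { 0.3, 0.8, 0.7, 0.3, 0.2 },   /* forearm */
    { 0.1, 0.2, 0.5, 0.8, 0.9 },   /* palm    */
};

/* Return the index of the template closest (squared Euclidean distance)
 * to the observed sensor pattern. */
static int classify_tap(const double sample[NUM_SENSORS])
{
    int best = 0;
    double best_dist = INFINITY;
    for (int loc = 0; loc < NUM_LOCATIONS; loc++) {
        double dist = 0.0;
        for (int s = 0; s < NUM_SENSORS; s++) {
            double d = sample[s] - templates[loc][s];
            dist += d * d;
        }
        if (dist < best_dist) {
            best_dist = dist;
            best = loc;
        }
    }
    return best;
}

int main(void)
{
    double tap[NUM_SENSORS] = { 0.2, 0.7, 0.8, 0.4, 0.2 };
    printf("Tap resolved to: %s\n", locations[classify_tap(tap)]);
    return 0;
}
```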
Samsung on Monday announced what it claims is the industry's first 30nm-class DRAM to successfully complete customer evaluations in 2Gb (gigabit) densities.
"Our accelerated development of next generation 30nm-class DRAM should keep us in the most competitive position in the memory market," said Soo-In Cho, president, Memory Division, Samsung Electronics. "Our 30nm-class process technology will provide the most advanced low-power DDR3 available today and therein the most efficient DRAM solutions anywhere for the introduction of consumer electronics and server systems."
According to Samsung, shrinking down to a 30nm manufacturing process allows the company to raise production by 60 percent over 40nm-class DDR3. And as far as consumers are concerned, the company's Green DRAM lowers power consumption by up to 30 percent over 50nm-class DRAM. To give a real-world example, Samsung says a 4GB, 30nm module will consume only about 3 watts in a new-generation notebook.
IBM on Friday announced it has inked a new agreement with the ABB Group, a global provider of automation technologies, to transform the company's Information Systems (IS) infrastructure across 17 countries in Europe, North America, and Asia Pacific.
"With the new agreement, ABB will realize considerable savings, while harmonizing and optimizing IS infrastructure," said Haider Rashid, ABB's global Chief Information Officer. "Our partnership with IBM allows us to implement new technologies and processes to build for continued globalization of our business. At the same time, we will be improving energy efficiency."
ABB said it expects immediate cost savings as a result of the new agreement, which also puts the company in a better position to utilize cloud computing down the line.
Chances are you've heard of graphene transistors before, and that's because the technology is touted as capable of one day replacing silicon. IBM Research claims to have just overcome one of the biggest roadblocks to getting there by opening a "bandgap" in carbon-based graphene field-effect transistors (FETs).
"Graphene doesn't naturally have a bandgap, which is necessary for most electronic applications," said IBM Fellow Phaedon Avouris. "But now we can report turnable electrical bandgaps of up to 130meV for our bi-layer graphene FETs. And larger bandgaps are certainly feasible."
Avouris says this latest breakthrough swings the door wide open for the future use of graphene in digital electronics and optoelectronic devices.
VIA this week unveiled what it claims is the first product based on the recently announced Mobile-ITX form factor, the EPIA-T700. It measures just 6cm x 6cm and is intended primarily for medical, military, and in-vehicle applications.
"The VIA EPIA-T700 takes advantage of the modular design principles inherent in our Mobile-ITX form factor specification, making it easier than ever before to create astonishingly compact x86 devices that don't compromise on features," said Daniel Wu, Vice President, VIA Embedded Platform Division, VIA.
VIA says its module can be used with a variety of carrier boards and is fully customizable. At the heart of the EPIA-T700 is a miniaturized 1GHz VIA Eden ULV processor and 512MB of DDR2 on-board memory.
If you haven't seen it yet, the movie Avatar crams a ton of special effects into a futuristic landscape, and it owes some of that magic to a data center nestled in Miramar, New Zealand.
According to Weta Digital, the visual effects company tasked with creating the images in James Cameron's flick, each minute of Avatar consumed some 17.28GB of data.
For that kind of processing power, Weta Digital tapped into a 10,000 sq. ft. server farm filled with 34 racks and over 4,000 Hewlett-Packard blade servers. According to Paul Gunn, the data center's system admin, the computing core includes about 35,000 processors and 104TB of RAM.
After a long wait, the Video Electronics Standards Association, or VESA for short, announced it has finalized DisplayPort v1.2, doubling the data rate of the previous DisplayPort v1.1a standard and paving the way for higher-performance 3D stereo displays, higher resolutions and color depths, and faster refresh rates.
"DisplayPort v1.2 increases performance by doubling the maximum data transfer rate from 10.8Gbps to 21.6Gbps, greatly increasing display resolution, color depths, refresh rates, and multiple display capabilities," VESA said in its press release.
Other features of the updated spec include multi-streaming, which is the ability to transport multiple independent, uncompressed display and audio streams over a single cable; support for high-speed, bi-directional data transfer; support for high-def audio formats; and synchronization assist between audio and video, multiple audio channels, and multiple audio sink devices using Global Time Code (GTC).