Raise Your PC IQ! 6 Key Technologies Explained

Alan Fackler

Hard-core geeks that we are, few things get us as worked up as digging into the nitty-gritty inner workings of our technology. After all, if you know how something works, you're better equipped to handle any malfunctions or problems that pop up. To that end, we include a White Paper in each issue of Maximum PC that provides thorough details on how a particular technology works.

In this article, we've gathered some of our most popular White Papers to explain how AES, HDMI 1.4, WiGig, IPv6, Advanced Format Drive Technology, and ARM-based processors operate. Since knowing is half the battle, you're now at least halfway prepared for any battle involving ARM or AES, and you're sure to ace that Tech round of Jeopardy. Let's begin:

Advanced Encryption Standard

How AES secures your data

Any 10-year-old knows how to protect information: Use a secret code. Disk encryption follows a similar process, bending and pulling information into a jumble that appears random. Reverse the steps and the bytes become readable again, restoring your Word doc, JPEG, or any other data into readable form. We’ll explore how disk encryption works and how AES secures your data.

While there is no clear “best” method of encryption, AES (Advanced Encryption Standard) is one of the most prominent. AES is free for any public or private use, including commercial applications. The encryption standard has also held up to great scrutiny, winning an open U.S. government competition to replace DES (Data Encryption Standard). AES originated as the Rijndael method, named after its designers, Vincent Rijmen and Joan Daemen. The federal government trusts it so much that its various agencies use it to secure information classified as Top Secret.

Since it’s good enough for the super-secretive National Security Agency—and it’s free!—you’ll find AES within most encryption systems, including the BitLocker and FileVault tools built into Windows 7 and OS X, respectively. You’ll also find it deployed within SSL websites, Wi-Fi networks (WPA2), and other applications. AES keeps your data safe because its details are widely known and tested. Here’s how it works.
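
If you want to see that symmetric key in action for yourself, here's a minimal Python sketch using the third-party cryptography package (our pick purely for illustration; it isn't what BitLocker or FileVault use under the hood), with a throwaway key, IV, and message:

    # A minimal AES-256-CBC sketch using the third-party "cryptography" package.
    # The key, IV, and message are throwaway values; real tools manage keys for
    # you and layer integrity checks on top of the raw cipher.
    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # one 256-bit symmetric key encrypts AND decrypts
    iv = os.urandom(16)    # per-message initialization vector
    plaintext = b"My Word doc, JPEG, or any other data"

    # Pad to the 16-byte AES block size, then encrypt.
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    # The same key reverses the process.
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    unpadder = padding.PKCS7(128).unpadder()
    recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
    assert recovered == plaintext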

AES ABC

Each data encryption step can seem simple on its own, and AES shares many of the same fundamental building blocks, called cryptographic primitives, as other methods. But the specific combinations and AES constants render it unique.

AES first runs its key-expansion step, turning your password into a series of keys. AES uses a symmetric key: Your original password encrypts and decrypts the data. The process is also known as secret key, single key, shared key, and private key. Asymmetric-key encryption, by comparison, relies on different passwords to encrypt and decrypt.

The key-expansion process rotates your input data—simply shifting and transposing it linearly—and then exponentially expands everything through several layers of math. Your AES keys can be 128, 192, or 256 bits long—as the size increases, so does the complexity, rendering it harder for hackers to guess their way into your data.

AES, like most encryption schemes, uses a network of substitution and permutation boxes to scramble data in a controlled way. A substitution box translates one point of data into another, such as a basic alphanumeric encryption scheme that turns “A” into “1,” “B” into “2,” and so on. The permutation box shifts that result: “1” could become “2” and “2” could become “3.”

AES uses more complicated, repeating sets of S- and P-box rules so that a small change in any input will greatly change results as the process continues. Its mathematically generated lookup-table and substitution rules are designed to create this ordered chaos. Yet since the rules are public, critics can see that AES designers didn’t plant backdoor access in the process.
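
To make the S-box/P-box idea concrete, here's a deliberately toy substitution-permutation round in Python. The 4-bit S-box below is invented for illustration and has nothing to do with AES's real 16x16 byte table:

    # A toy substitution-permutation round on 4-bit values -- illustration only.
    # This S-box is invented; AES's real S-box is a mathematically derived
    # 16x16 table of bytes.
    S_BOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
             0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

    def toy_round(nibbles):
        substituted = [S_BOX[n] for n in nibbles]     # S-box: swap each value
        return substituted[1:] + substituted[:1]      # P-box: shift positions

    state = [0x1, 0x2, 0x3, 0x4]
    for _ in range(4):            # repeating rounds spreads small input changes
        state = toy_round(state)
    print(state)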

Round and Round

Each encryption repetition is called a round. But before the first round occurs, AES applies the AddRoundKey process, merging the first-round key—generated from your password—with bytes of plaintext (unencrypted) data. The parts combine with the “exclusive or” operation (XOR), generating a value of “true” if exactly one of the inputs is true. Those results are passed along.
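
In code, AddRoundKey is nothing more than a byte-by-byte XOR; the block and round key below are made-up values:

    # AddRoundKey: XOR each byte of the block with the matching round-key byte.
    block = bytes(range(16))            # made-up 16-byte plaintext block
    round_key = bytes([0xA5] * 16)      # made-up round key from key expansion
    state = bytes(b ^ k for b, k in zip(block, round_key))
    # XOR is its own inverse: applying the same key again restores the block.
    assert bytes(s ^ k for s, k in zip(state, round_key)) == block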

A full AES round has four steps, beginning with the SubBytes process. This looks up the first and second characters of a byte (in hex) on AES’s substitution box matrix, revealing a new byte from the intersection of the row and column.

Imagine writing those results on a piece of paper wrapped around a cylinder. If you twist the paper and cut it free at a new place, you’ll get a new starting and ending point. The ShiftRows process does this, offsetting a string of four bytes up to three places. For example, “7e, ab, 09, 8d” becomes “ab, 09, 8d, 7e” when shifted one place.
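
A one-line rotation in Python reproduces that exact shift:

    # ShiftRows-style rotation: offset a row of four bytes by one place.
    row = ["7e", "ab", "09", "8d"]
    shifted = row[1:] + row[:1]
    print(shifted)    # ['ab', '09', '8d', '7e']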

The third step, MixColumns, changes groups of four bytes as columns, multiplying them in certain ways based on a fixed matrix. Each byte gets multiplied four times; each iteration leaves it unchanged, shifts it left, or shifts it left and combines XOR with the prior value.
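
That "shift left, or shift left and XOR" behavior is the standard doubling step in AES's finite field, often called xtime; here's a quick sketch of it:

    # xtime: multiply a byte by 2 in AES's finite field GF(2^8).
    # A left shift that overflows the byte is folded back in by XORing with
    # AES's fixed reduction polynomial -- the "shift left and XOR" case above.
    def xtime(b):
        b <<= 1
        if b & 0x100:        # the shifted-out bit triggers the reduction
            b ^= 0x11B
        return b & 0xFF

    print(hex(xtime(0x57)))  # 0xae -- plain shift, no reduction needed
    print(hex(xtime(0xAE)))  # 0x47 -- shift plus XOR reduction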

AddRoundKey makes the final step, combining the output bytes with a new round key through the XOR operation.
The resulting data gets fed back into a new round.

AES repeats these steps a certain number of times depending on the length of the key: 128-bit keys run through 10 rounds, 192-bit keys last for 12 rounds, and 256-bit keys extend to 14 rounds. Since bigger keys involve more number crunching, they take longer to encrypt and to decrypt.
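
Expressed as a simple lookup, the key-size-to-rounds relationship looks like this:

    # Rounds per AES key length, in bits.
    ROUNDS = {128: 10, 192: 12, 256: 14}
    for bits, rounds in ROUNDS.items():
        print(f"AES-{bits}: {rounds} rounds")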

When the time comes to reveal your original data, the ciphertext is decrypted back into plaintext by running these steps in reverse.

Boot It

AES—any encryption scheme, really—works to protect specific files and folders, but things get more complicated if you want to encrypt an entire disk. Specifically, if the OS is encrypted, how can you input a password and begin the decryption process?

In that case, you’ll either need a disk or other hardware that can decrypt before you boot, or your tools will begin decryption within the BIOS or boot firmware.

Specific implementations of AES can be flawed, letting an attacker intercept a password, for example. The core method, however, shows little vulnerability to attack, giving it staying power as one of the most prominent encryption methods.

HDMI 1.4

The HDMI Licensing consortium is adding 3D, additional color spaces, higher resolution, Ethernet, and more to this increasingly ubiquitous interface

Hitachi, Panasonic, Philips, Silicon Image, Sony, Technicolor, and Toshiba make up the HDMI Founders, the group that defines the specifications and direction for the digital A/V connector. Since its 2002 launch, the consortium has introduced regular updates. HDMI 1.4 and HDMI 1.4a specify the most recent changes.

HDMI utilizes three physically separate means of communication: Display Data Channel (DDC), Transition Minimized Differential Signaling (TMDS), and Consumer Electronics Control (CEC). DDC provides a path for devices to report their audio and video specifications so they can automatically configure their maximum resolutions. DDC also handles HDCP (High-bandwidth Digital Content Protection) authentication.

TMDS supplies the main audio, video, and related auxiliary data, which can be encrypted using HDCP. TMDS requires such precise timing that there’s a wire tolerance of just 1/20,000-inch. Bandwidth in the original HDMI spec tops off at 4.9Gb/s for video and up to eight channels of audio.

HDMI 1.4, finalized in May 2009, introduced 3D support, higher resolutions, additional color spaces, Audio Return Channel, Ethernet, and two new connector types. The 1.4a specification, finalized in May 2010, concretizes the structure of 3D content in movies, games, and TV.

3D Takes Shape

HDMI 1.4 defines 3D structures in the way frames are delivered, so that an HDMI 1.4 Blu-ray player from one manufacturer will work with any other manufacturer’s HDMI 1.4 3D TV.

3D video is often delivered in two channels: one for each eye. Supported 3D structures handle the two streams in several ways: as interlaced, field-alternative frames; with frame packing that stacks the two 720p or 1080p frames vertically; with full- or half-resolution frames side by side; in full resolution with alternating lines; and more.

The 3D structure standards also support 2D+depth and the similar WOWvx format for the experimental Philips glasses-free 3D TVs. These methods send the left 2D frame and a grayscale depth map that the TV can turn into a stereoscopic pair of frames or 3D effects for glasses-free 3D TVs.


The HDMI Licensing consortium has banned cable manufacturers from labeling their products with HDMI spec numbers. Make sure the cable you buy is labeled "HDMI High Speed" or "HDMI High Speed with Ethernet" if you want 100Mb Ethernet support.

HDMI 1.4 source devices must support at least one of the mandatory formats: 1080p/24 with frame packing for film content, or 720p/60 or 720p/50 with frame packing for gaming content. Display devices must support each mandatory format. Additionally, HDMI 1.4a defines the mandatory broadcast frame formats that displays must support: side-by-side horizontal 1080i/60 or 1080i/50, top-and-bottom 720p/60 or 720p/50, and top-and-bottom 1080p/24.

The display device identifies its 3D capabilities with EDID (extended display identification data), and the source device automatically sends a supported format. Additional metadata defines the video structure and format as InfoFrames sent through the TMDS channel.

While HDMI 1.4 cleanly defines these 3D standards, DVI and certain HDMI 1.3 devices can still send supported 3D structures, making them compatible with an HDMI 1.4 TV. For example, an upcoming PlayStation 3 update will add Blu-ray 3D support, while other companies have sold HDMI 1.3 sources and displays that work together. Nvidia’s 3DTV Play system will connect an Nvidia-powered PC to an HDMI 1.4 3D TV, even if you need to adapt from DVI on your videocard. Either way, that setup supports the stacked-frame 3D structure as defined in HDMI 1.4.

Get the Picture

While HDMI 1.3 ramped up bandwidth, HDMI 1.4 takes advantage of that headroom, supporting 4Kx2K video formats (up to 4096x2160 pixels at 24 frames per second). That’s the same format as many digital movie theaters. Depending on their color gamut, those big streams can reach the 10.2Gb/s maximum. And while HDMI 1.3 widened the color gamut, HDMI 1.4 supports digital still camera (DSC) spaces: sYCC601, AdobeRGB, and AdobeYCC601. These let you connect a camera to different displays and maintain consistent color reproduction. The wide color space also allows HDMI 1.4 devices to show a bigger range of hues than before.
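
As a rough sanity check (one that ignores blanking intervals and TMDS encoding overhead, so treat it as a ballpark only), the raw pixel payload of a 4Kx2K/24 stream sits comfortably under that ceiling:

    # Ballpark raw pixel payload for 4Kx2K at 24fps with 24-bit color.
    # Real HDMI links also carry blanking intervals and TMDS encoding overhead,
    # so the actual wire rate is higher than this figure.
    width, height, fps, bits_per_pixel = 4096, 2160, 24, 24
    payload_gbps = width * height * fps * bits_per_pixel / 1e9
    print(f"~{payload_gbps:.1f} Gb/s of pixel data vs. the 10.2 Gb/s ceiling")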

The HDMI consortium recommends the use of “High Speed” HDMI cables for these advanced display technologies.

Ethernet Connections

HDMI 1.4 adds full-duplex 100Mb/s Ethernet: Connect one device—even your TV—to your hardwired network and every other HDMI device in the chain becomes a part of your network.

HDMI cables retain the same pin configurations as before; the Ethernet Data Channel is carried on the existing DDC/CEC ground, HPD (hot-plug detect), and Utility pins. Keep in mind that you’ll need a new HDMI cable bearing the “with Ethernet” designation, since this upgraded version shields each wire to eliminate crosstalk.

Audio Return Channel eliminates yet another cable. This lets a TV with a built-in tuner or other A/V device send its audio signal upstream to your receiver over the single, connected HDMI cable. This process is compatible with LipSync, introduced in HDMI 1.3.

Plug In

HDMI 1.4 introduces an additional automotive connector that carries a maximum 720p/1080i signal and omits Ethernet, but includes a locking mechanism that’s more resilient to road conditions. A Micro connector, also new to HDMI 1.4, is about the same size as Micro USB. It supports the same features and is equipped with a full complement of 19 pins, just like the standard and mini versions.

One final note: The HDMI Licensing consortium has mandated that manufacturers move away from using HDMI version numbers and categories in their packaging and marketing materials in favor of HDMI names. If you have older HDMI 1.3 cables, anything identified as “Category 1” is the same thing as a newer “HDMI Standard” cable, and those labeled “Category 2” are the same as newer “HDMI High Speed” cables. Essentially, the only reason to upgrade from an HDMI 1.3 Category 2 cable is to gain Ethernet and the HDMI Audio Return Channel.

WiGig

One-gigabit-per-second data rates are nigh, thanks to the Wireless Gigabit Alliance's emerging 802.11ad standard. Here's how it works

It’s no secret that current wireless networking standards are imperfect. The no-cable approach is hyper-convenient, and certainly Wi-Fi in its various recent incarnations—802.11g and its theoretical 54Mb/s throughput, and 802.11n and its theoretical 300Mb/s data rates—is sufficient for moving data, streaming compressed media, and playing online games. But compare it to Ethernet (or a well-structured Power-line or MoCA network, for that matter), and Wi-Fi simply wilts—especially when you introduce such pesky barriers as walls, appliances, and human beings to the environment. And if you’re thinking of streaming HD video without wires, you’d best prepare for a shuddering trickle rather than a raging torrent.

Thankfully, a new sheriff is on his way into town. He hasn’t officially been appointed yet, but his gun is in its holster. And what a gun it is, promising theoretical peak throughput as high as seven gigabits per second (7Gb/s) and a real-world bare minimum of 1Gb/s. That’s more than 10 times the throughput of current wireless solutions.

It’s called WiGig, and the big reason it’s so special is that it taps into the 60GHz frequency band.

Gigahertz, Gigabits, Path Loss

The radios in today’s Wi-Fi routers operate on one of two unlicensed frequency bands: 2.4GHz or 5GHz (dual-band routers have one of each).

The 2.4GHz band has relatively few channels to work with and is extremely crowded with existing 802.11b/g/n networks—not to mention the energy emitted by some cordless phones, baby monitors, microwave ovens, and a host of other devices. The 5GHz band is less crowded and has more channels, but it’s not much wider than 2.4GHz and, as utilized by the 802.11n standard, remains limited to a maximum theoretical throughput of 300Mb/s.

The 60GHz frequency band is an entirely different animal. In 2001, the FCC allocated seven contiguous gigahertz of spectrum (from 57GHz to 64GHz) to unlicensed wireless communications. Prior to the FCC’s move, only 0.3GHz of bandwidth was available for that purpose.


Radios that operate at higher frequencies deliver more bandwidth, but radio waves on very high frequencies are more readily absorbed by walls and other physical barriers. By this time next year, we expect to see tri-band wireless routers with radios operating on the 2.4-, 5.0-, and 60GHz frequency bands. The 60GHz radios, however, will likely depend on line of sight.


The 60GHz band is cool for a number of other reasons. First, one of its apparent drawbacks—extremely limited range—is actually a benefit. You see, 60GHz is far more susceptible than other frequencies to a phenomenon called path loss. A 60GHz electromagnetic wave suffers a significant reduction in power density as it propagates through space, preventing signals from traveling nearly as far as, say, those in the 2.4GHz or 5GHz bands.
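
You can see the effect with the textbook free-space path-loss formula; this quick sketch just plugs in the numbers and isn't taken from the WiGig spec:

    # Free-space path loss in dB, from the textbook Friis relationship.
    import math

    def fspl_db(distance_m, freq_hz):
        c = 299_792_458.0    # speed of light, m/s
        return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
                + 20 * math.log10(4 * math.pi / c))

    for ghz in (2.4, 5.0, 60.0):
        print(f"{ghz:>4} GHz over 10 m: {fspl_db(10, ghz * 1e9):.1f} dB")
    # 60GHz loses roughly 28 dB more than 2.4GHz over the same distance.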

Yet susceptibility to higher path loss also makes it an ideal solution for short-range, at-home use. According to RF Globalnet, a trade publication for the radio frequency/microwave design industry, 100,000 systems operating at 60GHz can be located within 10 square kilometers without interference hassles. Indeed, the 60GHz wireless LAN range is so short that the WiGig Alliance promotes it as an in-room rather than whole-house solution.

Under the Hood

So what makes WiGig so powerful over the short haul? A low probability of interference certainly helps, as does that 7GHz chunk of available bandwidth, which in turn enables much higher data rates. As WiGig Alliance board member Bruce Montag explains, “There is more unencumbered spectrum available in the 60GHz band, which enables higher spectrum utilization and multi-gigabit throughput.”

Moreover, WiGig devices will pack a serious punch. According to WiGig president Dr. Ali Sadri, WiGig will compensate for path loss by utilizing high-gain antennas at both the transmitter and receiver stages to increase the “link margin”—the difference between the receiver’s sensitivity and the actual received power—and the total EIRP (effective isotropic radiated power), which is the measurement of antenna signal strength. But raw power isn’t everything. In order for both transmitter and receiver to maximize the amount of gain, they rely on training algorithms that “learn” the most favorable direction for waveform arrival and departure. This is accomplished through something called “adaptive beamforming,” a process that increases link margin by dynamically steering signals along the best route.
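
The link-budget bookkeeping behind those terms is plain dB addition. Every figure in the sketch below is invented purely to show how EIRP and link margin relate:

    # Toy link-budget arithmetic in dB -- every number here is invented.
    tx_power_dbm = 10.0         # transmitter output power
    tx_antenna_gain = 15.0      # high-gain, beam-formed transmit antenna
    rx_antenna_gain = 15.0      # high-gain receive antenna
    path_loss_db = 88.0         # roughly 60GHz over 10 m (see the sketch above)
    rx_sensitivity_dbm = -65.0  # weakest signal the receiver can still decode

    eirp_dbm = tx_power_dbm + tx_antenna_gain
    received_dbm = eirp_dbm + rx_antenna_gain - path_loss_db
    link_margin_db = received_dbm - rx_sensitivity_dbm
    print(f"EIRP {eirp_dbm} dBm, received {received_dbm} dBm, margin {link_margin_db} dB")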

Given the shorter range of WiGig, Sadri and his organization maintain that it isn’t a competitor for existing Wi-Fi. Instead, it’s envisioned as an enhancement to existing Wi-Fi architecture. Sadri envisions a scenario where dual- and tri-band devices utilize the 60GHz frequencies for high-bandwidth line-of-sight networking and seamlessly fall back to legacy (2.4GHz or 5GHz) Wi-Fi for longer-range transmission. Sadri also envisions homes outfitted with signal reflectors mounted on the walls that will bounce WiGig signals from room to room. Even the Power-line and MoCA camps are greeting WiGig with open arms, noting that their technologies may be bridges between WiGig-equipped zones.

What's Next?

Of course, industry reactions may be a bow to the potential inevitability of WiGig. It’s backed by a veritable who’s who of the tech world, including industry giants such as Microsoft, Nokia, Intel, and Broadcom. The WiGig Alliance plans to submit its first-draft standard to the IEEE committee responsible for developing the latest wireless networking standard, known as IEEE 802.11ad, but that doesn’t mean it’s willing to wait. The WiGig Alliance has formed a strategic partnership with the Wi-Fi Alliance (the organization that certifies all 802.11a/b/g/n products for interoperability) and will likely produce a similar logo program if the IEEE doesn’t act quickly enough.

When will you be able to buy a WiGig device? Our sources tell us sometime between Q4 2010 and Q1 2011, although some proprietary point-to-point 60GHz products are available today.

Internet Protocol Version 6

How the next-generation Internet protocol, IPv6, will save the day by increasing the number of available addresses and adding new features

Are you prepared for the next Y2K? We’ve dubbed it “The Great IP Address Shortage of 2011—or Shortly Thereafter.” While “TGIPAS2011oST” doesn’t roll off the tongue as easily as “Y2K,” this creeping calamity could cause fundamental networking problems the day someone claims the very last public IPv4 (Internet Protocol version 4) address.

Fortunately, a solution is already here: IPv6 (Internet Protocol version 6) will create trillions (times trillions times trillions) more public addresses while introducing new networking features. Both IP technologies will coexist during the transition, so in many cases, IPv6 devices will switch to an IPv4 mode to communicate with old gear.

We’ll explain the basics of how Internet layers shuffle information across worldwide networks. Then we’ll discuss the changes and improvements in IPv6; it’s about more than adding addresses. With these details, you’ll be able to prepare for the dual-stack, hybrid near-future and the eventual transition to IPv6 exclusivity.

Peeling Back the Layers

The Internet uses four main layers to transmit data. You might think of them as nesting matryoshka dolls, with each layer in the progression encapsulating the next.

The Link Layer includes the hardware and software (or firmware) to handle the lowest level of communications, including Media Access Control (MAC). (While some define hardware connections as part of the Physical Layer below this, we’ll side with those who combine the two.) The Internet Layer rides on top of that, consisting largely of IPv4 or IPv6. The Transport Layer transmits and delivers data to specific application protocols residing in the Application Layer, including HTTP (Hypertext Transfer Protocol) and FTP (File Transfer Protocol).

The Internet Layer is especially important since you’ll sometimes set its addressing while configuring network equipment, such as a router. More often, the other layers are invisible to end-users; things just work.

Change of Address

A new addressing method marks the biggest change between IPv4 and IPv6. IPv4 technology uses 32-bit addresses that allow about 4 billion possible public nodes. Due to growing demand and inefficient allocation of existing IP addresses, the planet has consumed nearly all of these. Estimates vary as to exactly when we’ll run out of IPv4 addresses, but most experts anticipate it happening soon.

IPv6 solves the problem in a big way. It uses a 128-bit addressing scheme that creates 2^128 possible combinations: roughly 340 trillion trillion trillion (34 followed by 37 zeroes) IP addresses—enough to support every Internet-connected computer, phone, television, gaming console, refrigerator, and toaster humanity could ever want. Here’s an example of what an IPv6 address looks like: 2001:0DB8:AC10:2F3B:9C5A:FFFF:3FFE:02AA.
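
Python's standard ipaddress module will happily parse that example address, and a one-liner confirms the size of the address space:

    # Parse the example IPv6 address with Python's standard library.
    import ipaddress

    addr = ipaddress.ip_address("2001:0DB8:AC10:2F3B:9C5A:FFFF:3FFE:02AA")
    print(addr.version)       # 6
    print(addr.compressed)    # shorthand form with leading zeroes dropped
    print(2 ** 128)           # total IPv6 address space: about 3.4 x 10**38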


Since IPv4 and IPv6 will coexist for a time, IPv6 nodes will communicate with each other via tunnels through IPv4-only infrastructures using so-called "6to4" technology. This will enable devices (hosts, routers, etc.) with IPv6 addresses to reach IPv6 infrastructures without requiring a native IPv6 connection from their ISP.


And that’s not all that IPv6 will provide. It also eliminates the need for Network Address Translation (NAT), a form of IP masquerading developed to get around the IP address shortage by hiding entire address spaces. The primary drawback to NAT is that a host residing behind a NAT-enabled router does not have end-to-end connectivity and so cannot utilize some Internet protocols.

Network devices connected to an IPv6 network will be able to auto-configure much more robustly than is possible with the existing Dynamic Host Configuration Protocol (DHCP). You can also simply reconfigure an existing network, keeping the address suffix—and subnet—while changing the public prefix. And since an IPv6 subnet can have 2^64 addresses, ISPs and large institutions will no longer be forced to fragment their networks.
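
Again using the standard ipaddress module, a single /64 subnet (built here from the earlier example prefix) really does hold 2^64 addresses:

    # A single IPv6 /64 subnet holds 2**64 addresses.
    import ipaddress

    subnet = ipaddress.ip_network("2001:db8:ac10:2f3b::/64")
    print(subnet.num_addresses == 2 ** 64)    # True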

As with IPv4, data exchanged using IPv6 is contained inside packets. IPv6 packets consist of three elements: a fixed header with addressing information, an optional extension header that enables additional features, and a payload. IPv6 packets can be processed faster than IPv4 packets because their packet headers do not contain a checksum. Checksums are used in IPv4 to verify that data has been properly sent and received, but this task is performed at a higher layer with IPv6. In addition, some infrequently used processes have been moved out of the fixed header and into the optional header.
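
The 40-byte fixed header is simple enough to pack by hand. This sketch follows the published field layout; the addresses, payload length, and other values are made up:

    # Pack a 40-byte IPv6 fixed header by hand; the field values are made up.
    import struct
    import ipaddress

    version, traffic_class, flow_label = 6, 0, 0
    payload_len, next_header, hop_limit = 20, 6, 64    # next header 6 = TCP
    src = ipaddress.ip_address("2001:db8::1").packed
    dst = ipaddress.ip_address("2001:db8::2").packed

    first_word = (version << 28) | (traffic_class << 20) | flow_label
    header = struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit) + src + dst
    print(len(header))    # 40 bytes -- and no checksum field anywhere in it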

Countdown to IPv6

All modern operating systems are ready for the transition to IPv6, including Windows XP with Service Pack 3. For the moment, operating systems can run in a dual-stack mode, juggling the two IP standards and often two IP addresses at the same time. For example, they might use IPv6 internally and route to an IPv4 destination externally.

Operating systems also support “6to4” translation. This method allows IPv6 devices and networks to communicate across IPv4 sections of the Internet. The technique stores an IPv6 packet inside the payload of an IPv4 packet, like a ferry transports cars across a river. Relay servers or a dual-stack destination on the other side can unlock the IPv6 data once clear of the IPv4 network. As long as you have a public IPv4 address, the technique should work even if your ISP hasn’t yet adopted IPv6.
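
The 6to4 prefix is derived mechanically from your public IPv4 address (the well-known 2002::/16 block with your IPv4 bits embedded). Here's that derivation in Python, using a made-up public address:

    # Derive a 6to4 prefix: 2002::/16 with the public IPv4 address embedded.
    import ipaddress

    public_v4 = ipaddress.ip_address("203.0.113.5")    # made-up public address
    v4_hex = f"{int(public_v4):08x}"                   # "cb007105"
    prefix = ipaddress.ip_network(f"2002:{v4_hex[:4]}:{v4_hex[4:]}::/48")
    print(prefix)    # 2002:cb00:7105::/48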

It’s harder for IPv4 devices to communicate with IPv6. Translational gateways and proxies can help but they aren’t reliable. Once your favorite Internet services switch over to IPv6-only, your IPv4-only equipment might not work. Thankfully, the dual-stack transitional phase should stave off that problem for many years.

Advanced Format Drive Technology

How hard drive makers are taking the next step toward more reliable and higher-capacity hard drives--and why you should care

Solid state drives (SSDs) might very well be the future of storage, but mechanical hard drives aren’t going the way of the dodo anytime soon. In fact, there’s a major shift taking place in the underlying architecture of HDDs, one that will result in greater reliability and, eventually, higher capacities.

Essentially a new formatting standard proposed by the International Disk Drive Equipment and Materials Association (IDEMA), this long overdue change represents the biggest format shift in three decades, according to hard drive makers. Every hard drive maker has committed to making the transition by early 2011, and Western Digital has already embraced the standard, which the company is calling Advanced Format. But what exactly does this format shift entail?

Put simply, Advanced Format technology changes how data is stored. Most of today’s hard drives store information in 512-byte chunks called sectors, a scheme that made sense when HDDs were measured in megabytes. But as capacities have ballooned in size, so too has the need for much larger sectors. The next generation of hard drives will address this issue by significantly increasing the size of the sectors on the media to store  4,096 bytes (4K) of data, and as you might have heard, that could be a problem for Windows XP users (we’ll tell you why in a moment).

Why the Need for Change?

One reason why hard drive makers originally decided to format hard drives into blocks of 512 bytes is because that was the standard sector size of floppy disks. But now that HDDs are orders of magnitude more capacious and have entered the terabyte era, Advanced Format technology paves the way for much better efficiency.

To understand why, we need to look at how sectors are composed. Each 512-byte sector is made up of three parts: a Sync/DAM block header responsible for data addressing, error correction code (ECC) tasked with maintaining the integrity of the data within the sector, and a tiny inter-sector gap between each block. Spread out over a 2TB hard drive, this arrangement breaks down into almost 4 billion sectors. That’s a lot of overhead to contend with, and by expanding the sector size to 4KB—equivalent to an eightfold increase—hard drive makers are able to remove a large number of Sync/DAM blocks, inter-sector gaps, and ECC blocks. Think of it as trimming the fat.
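
The sector count is easy to sanity-check in Python (treating 2TB as 2 trillion bytes, as drive makers do):

    # Sector counts for a 2TB (2 * 10**12 byte) drive.
    drive_bytes = 2 * 10 ** 12
    print(drive_bytes // 512)     # 3,906,250,000 -- nearly 4 billion legacy sectors
    print(drive_bytes // 4096)    # ~488 million sectors with Advanced Format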


Switching to 4K sectors solves two fundamental problems intrinsic to legacy 512-byte sectors. First, it allows hard drive makers to drastically reduce the number of Sync/DAM and inter-sector gaps, resulting in much less wasted space. And second, it allows for longer ECC blocks, better equipped to catch and eradicate data errors.

So, what’s the big deal? Larger hard drives, for one. Western Digital tells us that switching to 4K sectors translates into approximately 7 to 11 percent more disk capacity, though don’t expect to suddenly gain additional space by installing an Advanced Format drive. As WD explains it, “the increase in disk space is not realized or gained in the drives today, but is the next move to larger capacity drives.”

Put another way, hard drives are quickly reaching the threshold where it doesn’t make sense to add more capacity. That’s because the bigger the hard drive, the more important ECC becomes to ensure your data stays error-free. Larger hard drives require a lot more space for ECC, and any small gains in capacity at this point end up going almost entirely to ECC, leaving very little additional space for user data. But by reducing the number of blocks, less overhead is needed to maintain data integrity. To give you an idea of what we’re talking about, it takes about 100 bytes of ECC data for a 4K sector compared to 320 bytes for eight 512B sectors. That’s a pretty significant savings when you’re talking about multiple terabytes.
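
Using those figures, the savings work out like this (the 40-bytes-per-legacy-sector number is simply the quoted 320 bytes divided by eight):

    # ECC overhead for 4KB of user data, using the byte counts quoted above.
    ecc_legacy = 8 * 40     # eight 512B sectors at ~40 bytes of ECC apiece
    ecc_advanced = 100      # one 4K sector needs ~100 bytes of ECC
    saving = 100 * (1 - ecc_advanced / ecc_legacy)
    print(f"{ecc_legacy} vs. {ecc_advanced} bytes: about {saving:.0f}% less ECC overhead")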

It should also be noted that larger sectors make it possible to develop more efficient ECC schemes with longer algorithms. Most errors have a tendency to come clumped together in “bursts,” and according to Western Digital, one of the biggest benefits of Advanced Format is that burst error correction is improved 50 percent through the use of larger ECC code blocks.

Potential Pitfalls

If you’re looking for a caveat, here it is. Users of older operating systems—primarily Windows XP—can expect a somewhat bumpy transition to Advanced Format. You shouldn’t have any problem running an Advanced Format drive on Windows Vista, Windows 7, any flavor of Mac OS X from Tiger on up, and all Linux kernels released after September 2009. Windows XP, on the other hand, was developed before hard drive makers decided on using 4K sectors.

To ensure backward compatibility, Western Digital has implemented an emulation scheme that maps eight 512B logical sectors into a single 4K physical sector. This is how all OSes will see the drive, and while WD claims the benefits remain, XP users could see a performance hit of as much as 10 percent. That’s because before Vista, Microsoft set the default partition to start at sector 63, a number that isn’t divisible by 8. Using eight-sector clusters in XP tends to result in misaligned partitions, causing writes to straddle two 4K physical sectors instead of fitting neatly in one.
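
A quick check shows why sector 63 is the troublemaker and why the newer 1MiB-aligned default (sector 2048 in Vista and Windows 7) isn't:

    # Is a partition that starts at a given 512B logical sector 4K-aligned?
    def aligned_4k(start_sector, logical_bytes=512):
        return (start_sector * logical_bytes) % 4096 == 0

    print(aligned_4k(63))      # False -- XP's legacy default start, misaligned
    print(aligned_4k(2048))    # True  -- Vista/7 default, 1MiB-aligned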

Western Digital thought of this, too, and has made available a software utility designed to correct alignment issues. In some cases, such as a clean install, you may need to apply a jumper across certain pins, and if you’re using a cloning utility, you’ll need to run WD’s alignment app even in Vista and Windows 7. If this all sounds overwhelming, don’t fret—Western Digital has put together a handy chart detailing what you need to do for different configurations.

ARM-Based Processors

Intel likes to talk up its Atom line of CPUs these days. Mostly found in netbooks, the latest version of Atom is an SoC—system on chip. While Atom runs at speeds up to 1.8GHz, Intel mostly talks about its power efficiency.

Developers who build SoCs around ARM (www.arm.com) processor cores laugh at this—they’ve been building low-power systems-on-chips for years. And with the latest Cortex A9 cores, ARM processors are arguably more powerful than any Atom, albeit not compatible with x86 code.

An SoC integrates all the functionality you’d expect in a computer: CPU, GPU, memory controller, cache, peripheral interfaces like USB and disk I/O, PCI Express, and more. Some SoCs even include onboard memory. Your cell phone has an SoC, your HDTV has one, and SoCs live in your car. They’re unseen, omnipresent, and handle virtually all the computing chores needed for daily digital life.

ARM is one of the largest developers of embedded CPU cores; the company also designs embedded graphics processors and complete SoCs. It does not, however, sell chips. Instead, it licenses the CPU designs to other companies, who are free to modify and add on to the ARM intellectual property to target specific applications.

Strengths and Limitations


ARM-based processors deliver complete system functionality in tiny die packages—ranging from 20mm² to 60mm². Contrast that with the latest Atom N450, which is 66mm². Better yet, compare that to the 195mm² combined die size for Intel’s 32nm Clarkdale plus graphics, and you get an idea of just how small these SoCs can be.

The thermal and power envelopes for these processors are extremely tight. Battery-life requirements for cell phones are stringent, so you typically can’t run them at very high frequencies, even if the chip is capable of high clock speeds. However, ARM and its chip design partners have built in aggressive power management. Various parts of the chip can go “dark” when not needed, and clock gating is used wherever necessary. By the same token, individual cores can ramp up to high frequencies when needed, then drop down quickly.


So what you find in an ARM-based SoC is a relatively small, highly integrated part that consumes little power, but delivers the performance when needed. Even the GPUs are designed for small platforms, using tile-based rendering, for example, to minimize memory use and bandwidth costs. The GPU can deliver 30–60fps in a game if it’s running at just 320x240 or a similarly low resolution.

When a company implements an ARM core into its chip-level product, it actually gets the IP in the form of a macro cell that can be dropped into the electronic design tools used by the OEM company. The designers can mod the CPU or not, depending on what they want to accomplish.

Case Study: TI OMAP


TI’s OMAP 3430 is built around an ARM Cortex A8 core, and powers several cell phones, including the highly regarded Palm Pre.

While the OMAP 3430 is an SoC, it still needs a ton of interfaces to the outside world. Unlike some competing products, this generation of OMAP doesn’t integrate GPS or Wi-Fi capability into the main die. However, it does build in an Imagination Technologies PowerVR SGX GPU core for integrated 2D/3D graphics. Also integrated is the IVA 2+ multimedia accelerator, which supports resolutions up to 720p, and can encode or decode HD MPEG-4, H.264, and WMV9 at 720x480.

This can easily drive the Palm Pre’s 320x480 LCD display. The Cortex A8 core drives the Pre’s ability to multitask using Palm’s WebOS, something lacking in competing smartphones—especially Apple’s iPhone.

One of the coolest things about the OMAP is the hardware encryption/decryption engine, supporting AES, DES, PKA, SHA-1, and other encryption algorithms. This allows for secure communication in a wireless-enabled environment.

Case Study: Nvidia Tegra APX 2600


The Tegra APX 2600 is used by Microsoft’s Zune HD portable media player. The APX 2600 actually uses two different ARM cores.

The ARM11 core is the general-purpose compute core, while the ARM7 handles additional audio and video chores. The ARM11 runs at 600MHz, with 64KB of L1 cache (32KB instruction, 32KB data). The CPU also includes 256KB of shared L2 cache.


While the ARM7 handles some audio chores, the APX 2600 also includes separate HD video encode and decode sections on the chip, plus an Nvidia-designed GPU. The CPU has the usual sets of interfaces to flash memory, USB, HDMI, and so on.

All this fits into the tiny Zune HD package, which includes 720p HD video output through HDMI, 3D support for handheld gaming, and web-browsing capability.

ARM Everywhere


As we’ve seen from our two case studies, ARM processor cores are used in some of the coolest gadgets available. Other products use ARM technologies, such as Qualcomm’s Snapdragon CPU, found in Google’s Nexus One Android-based smartphone. Even tinier chips using lower power and less-capable ARM implementations are used for dedicated tasks in automobiles, home electronics, and simpler mobile phones.

Given the prevalence of ARM-based CPUs, as well as the depth of the software development community, we’re even starting to see PC-like ARM systems, such as the  Apple iPad and the Linux-based smartbooks on display at CES. As Intel tries harder to push into more embedded systems with the x86 Atom, it should be no surprise that ARM will try to push back. Maybe “ARM inside” will be the logo on your next laptop.
