In our last white paper roundup, we explained the technology behind three modern connectors. And while stuff like USB 3.0 and Light Peak is pretty exciting, we can't help but feel like technologies that speed up physical connections are a little behind the times. After all, isn't the future supposed to be wireless?
In that spirit, our new batch of white papers explores the wild world of wireless technologies, including 4G, Near Field Communication, and 802.11ac Wi-Fi. So keep reading, and educate yourself about this generation's wireless tech.
The statement “4G mobile technology has evolved beyond 3G” might score high on the “Duh” meter, but when we asked how this evolution manifested itself, we got different answers from different folks.
Mobile companies maintain that they’re rolling out 4G networks and handsets today, but the ITU-R (International Telecommunications Union Radiocommunication Sector) disagrees. That standards body maintains that the gear currently being advertised as 4G falls well short of its ideal, and that true 4G networks and devices lie a few years into the future. We’ll explain where 4G is today, how the networks have improved since 3G, and where the ITU-R wants the industry to go.
Devices and services being marketed as 4G today are actually 3GPP LTE (Third-Generation Partnership Project Long-Term Evolution) and Mobile WiMAX (IEEE 802.16e). Both technologies represent major overhauls to prior networks, so they’re more advanced than 3G, but they’re not quite 4G. Sprint has chosen WiMAX, and Verizon and AT&T are moving ahead with LTE.
While each company evangelizes its own decision, the differences are nuanced. “In many ways, WiMAX and LTE are pretty comparable,” says Brian Higgins, Verizon’s executive director of ecosystem development. “Both are OFDMA-based technologies, so they’re quite similar.”
OFDMA (Orthogonal Frequency-Division Multiple Access) changes the way the wireless spectrum is divvied up. CDMA (Code Division Multiple Access), which Verizon uses today, assigns each transmitter a code in order to multiplex the signals from many users over the same physical channel. OFDMA instead schedules resources in two dimensions, time and frequency, assigning each user a set of narrow subcarriers that overlap without interfering, so that multiple users can be served in the same time slot. Verizon’s LTE service will divide the 700MHz frequency spectrum it purchased during the 2008 FCC auction into 10MHz channels.
“Within milliseconds,” says Higgins, “we’re making decisions on what chunk of frequency and what chunk of time we are going to allocate, and how many of those chunks, down to each individual user.” This allows the network to shift more bandwidth to more demanding requests in real time.
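The allocation decision Higgins describes can be sketched as a toy scheduler: the base station carves its spectrum into (time slot, frequency chunk) resource blocks and hands more blocks to the users with more pending data. The user names and demand figures below are invented for illustration; this is not Verizon's actual scheduling algorithm.

```python
# Toy OFDMA scheduler: split a channel's resource blocks among users
# in proportion to how much data each has waiting.

def schedule_ofdma(demands_kb, n_blocks):
    """Divide n_blocks frequency chunks among users, proportional to demand."""
    total = sum(demands_kb.values())
    allocation = {}
    remaining = n_blocks
    for user, demand in sorted(demands_kb.items()):
        share = min(round(n_blocks * demand / total), remaining)
        allocation[user] = share
        remaining -= share
    return allocation

# A 10MHz LTE channel carries 50 resource blocks (180kHz each) per slot.
alloc = schedule_ofdma({"phone_a": 300, "phone_b": 100, "phone_c": 100}, 50)
print(alloc)  # phone_a, with 3x the pending data, gets 3x the blocks
```

A real scheduler also weighs channel quality and fairness per user, but the core idea is the same: chunks of frequency and time, reassigned every few milliseconds.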
Both next-gen mobile technologies—3GPP LTE and Mobile WiMAX—rely on Orthogonal Frequency-Division Multiple Access (OFDMA) to make the most efficient use of the available wireless spectrum. OFDMA allocates time and frequency range to each user on a schedule, so that multiple users can be supported in the same slice of time.
Verizon augments OFDMA with MIMO (Multiple-Input, Multiple-Output) antenna technology at the downlink end. MIMO takes wireless communications’ greatest weakness—multipath signal propagation—and turns it into an advantage. Radio signals scatter along multiple paths as they bounce off buildings, mountains, and other physical obstacles. Instead of rejecting the multiple signals, MIMO antennas accept all of them and combine them into a single coherent data stream.
Higgins uses sound waves to illustrate how MIMO works. Imagine listening to someone speak at the opposite end of a furnished room. That person represents a single radio transmitter. As the sound waves emanate from his mouth, some bounce off the walls, windows, and furniture, while upholstery, curtains, and other materials absorb others. Your ears represent a radio receiver. “If you are someone who has just one ear,” says Higgins, “you’re going to have an ability to hear a conversation to a certain level. But if you have two ears—which is what you’re thinking about with MIMO—you have the ability to pick up different paths of sound coming across.”
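Higgins's two-ears analogy can be simulated: two receive antennas hear the same transmitted symbols over paths with different gains, each corrupted by independent noise. Weighting each copy by its path gain and summing them (a technique known as maximal-ratio combining, one of several ways MIMO receivers merge paths) produces fewer decision errors than listening with either antenna alone. All gains and noise levels below are invented.

```python
# Toy two-antenna receiver: combining both noisy copies of a signal
# recovers more symbols correctly than either copy on its own.
import random

random.seed(42)
symbols = [1, -1, 1, 1, -1, -1, 1, -1] * 500   # 4,000 transmitted symbols
gains = [0.9, 0.6]                              # two multipath gains

def receive(gain, noise_std):
    return [gain * s + random.gauss(0, noise_std) for s in symbols]

rx = [receive(g, 1.0) for g in gains]           # one noisy copy per "ear"

def errors(samples):
    return sum(1 for d, s in zip(samples, symbols) if (d >= 0) != (s > 0))

one_ear = errors(rx[0])
combined = errors([gains[0] * a + gains[1] * b for a, b in zip(*rx)])
print(one_ear, combined)  # combining makes noticeably fewer errors
```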
Packet-switched networks mark another major improvement in LTE and WiMAX. In earlier networks, such as CDMA, the phone transmits to a base station, and that traffic is then sent through a T1 circuit to a mobile switching station. The data can then be converted to IP (Internet Protocol) packets, if needed. LTE and WiMAX networks can process all traffic using IPv6 and avoid the conversion step. Verizon, however, will continue using CDMA for voice traffic for now, reserving LTE for data traffic.
All of these changes combine to reduce latency in LTE and WiMAX networks: OFDMA allocates bandwidth more efficiently, MIMO improves signal quality, and packet-switched networks reduce conversion overhead. Verizon claims its LTE latency is in the 30-50ms range, compared to nearly 500ms on some 3G networks. This should render Verizon’s network sufficiently responsive for online gaming, VoIP, and other demanding applications.
LTE can reach speeds of 100Mb/s downlink and 50Mb/s uplink, while WiMAX delivers up to 128Mb/s down and 56Mb/s up. These speeds hurtle past 3G standards, but they’re still far off the 4G purists’ target, which has led some to dub LTE and WiMAX “3.9G” technologies.
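To make those peak rates tangible, here is the time to pull down a 700MB file at each technology's theoretical downlink speed. The 3G figure (7.2Mb/s HSPA) is our own assumption for comparison, and all three are best-case numbers, not real-world throughput.

```python
# Download time for a 700MB file at the quoted peak downlink rates.

def seconds_to_download(megabytes, rate_mbps):
    return megabytes * 8 / rate_mbps  # 1 megabyte = 8 megabits

for name, rate in [("3G HSPA (assumed)", 7.2),
                   ("LTE downlink", 100),
                   ("WiMAX downlink", 128)]:
    print(f"{name}: {seconds_to_download(700, rate):.0f}s")
```

At LTE's quoted peak, the file arrives in under a minute; the assumed 3G link needs about 13 minutes.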
The ITU-R guideline for true 4G is known as IMT (International Mobile Telecommunications) Advanced. According to that standards body, wireless technologies must run at 1Gb/s for stationary users and 100Mb/s for moving connections. The IMT Advanced guideline also calls for significantly reduced latency: 10ms.
The ITU-R hasn’t identified a specification that meets its goals for IMT Advanced; instead, the next generations of LTE and WiMAX—LTE Advanced and IEEE 802.16m—are being designed to achieve those speeds.
While the ITU-R definition of 4G will push the industry toward even faster performance, carriers defend their “4G” designators. “We’re talking about doing basically an order of magnitude change in the capabilities of our wireless technology,” says Verizon’s Higgins. “To us, that’s a meaningful difference and is worthy of creating the right kind of brand around that, which is ‘4G.’ ”
You’re house-hunting and walk up to a home with a “For Sale” sign. You take out your NFC-enabled cell phone or tablet, point it at the sign, and without any further input quickly receive the property’s lot size, square footage, layout, and asking price, as well as a deferred link to an online 3D showcase of the property and its salient features.
Near Field Communication is a short-range, high-frequency wireless communication technology that is quickly working its way into mobile devices across the planet. Apple is widely rumored to be adding NFC support to upcoming iPhones and iPads, and Nokia, HTC, and LG have announced plans to support NFC.
We’ll explore how NFC’s underlying technology works, evaluate its prospects for future deployment, and assess the potential security risks involved.
Radio-frequency identification (RFID) is the most frequent point of comparison for Near Field Communication. Both provide information from a tag to a reading device, but RFID is an identifier technology, while NFC is an instigator. RFID tags essentially say, “This is what I am,” and present specific information. NFC says, “Here’s some data, and here’s where to go for more.” Or, in the case of NFC payment transactions, “I have your data, now I’ll go process the transaction for you.”
Both RFID and NFC operate in the unlicensed 13.56MHz frequency band. Both use electromagnetic field induction as a means of communication. Think about how electrical wires work. When you send a current through a wire, it generates a magnetic field around the wire. When you bring another wire into proximity of the first, that magnetic field acts on it and induces a matching current in the second wire, giving it the same electrical characteristics as the first.
If you use a coil of wire instead of a single strand, each additional loop reinforces the magnetic field, making it stronger. Using an equal number of loops in the two adjacent coils preserves the transferred electrical characteristics, while using a different number of loops on each side scales the induced voltage roughly in proportion to the turns ratio between the transmitting coil and the receiving coil.
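That loop-count relationship is the classic transformer turns-ratio rule: the voltage induced on the receiving coil scales with the ratio of receiving turns to transmitting turns. The values below are invented for illustration.

```python
# Transformer turns-ratio rule: induced voltage scales with the ratio
# of receiving-coil turns to transmitting-coil turns.

def induced_voltage(v_transmit, turns_transmit, turns_receive):
    return v_transmit * turns_receive / turns_transmit

print(induced_voltage(5.0, 10, 10))  # equal loop counts: 5.0V, unchanged
print(induced_voltage(5.0, 10, 4))   # fewer receiving loops: 2.0V
```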
This works well for digital data transmission because the data travels as a stream of 1s and 0s where, typically, a “1” is “on” (the presence of voltage) and a “0” is “off” (the absence of voltage), with the voltage often called the “signal” when it’s used for communication.
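That on/off encoding can be sketched in a few lines: each bit of a byte becomes the presence (1) or absence (0) of voltage for one symbol period, most-significant bit first. The helper name is our own.

```python
# Minimal on/off-keying sketch: one byte becomes eight voltage states.

def to_ook(byte_value):
    """Expand one byte into eight on/off voltage states, MSB first."""
    return [(byte_value >> bit) & 1 for bit in range(7, -1, -1)]

print(to_ook(0b10101010))  # [1, 0, 1, 0, 1, 0, 1, 0]
```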
Current thinking is that the NFC reader/writer (typically a cell phone or tablet, whose own battery primarily runs the device) should power the NFC “tag”: the tag carries no power source of its own, and instead draws operating power from the field the reader generates, much as a passive RFID tag does.
In our example above, when your smartphone (the reader/writer) enters the proximity of the “For Sale” sign (the tag), the phone’s field energizes the NFC chip embedded in the sign, which enters active mode and immediately begins exchanging data with your phone through its loop antenna. This inductive coupling works only in the near field, roughly within one wavelength of the antenna, where the modulated magnetic field can excite the loop antenna on your smartphone and induce the data onto it.
In a simple transfer of data from the tag to the reader, this could be as simple as, “Are you a compatible device?” “Are you ready to receive information?” and, “Here it is.” The incoming data could also include references to websites and other small data sets. If the connection is made to initiate a payment for goods or services, the tag can add a wait period for a response from the credit organization and then a confirmation that the payment was either accepted or declined, and a transaction number and date.
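The exchange described above can be modeled as a simple message sequence. The message strings, payload format, and URL below are invented for illustration; real NFC tags speak the NFC Forum's NDEF format rather than plain-English prompts.

```python
# Sketch of the tag/reader handshake as a transcript of messages.

def nfc_exchange(tag_payload, reader_compatible=True, reader_ready=True):
    transcript = [("tag", "Are you a compatible device?")]
    if not reader_compatible:
        return transcript                      # exchange stops here
    transcript.append(("reader", "Yes"))
    transcript.append(("tag", "Are you ready to receive information?"))
    if not reader_ready:
        return transcript
    transcript.append(("reader", "Yes"))
    transcript.append(("tag", tag_payload))    # "Here it is."
    return transcript

log = nfc_exchange({"type": "url", "data": "https://example.com/listing"})
print(len(log))  # a successful handshake takes 5 messages
```

A payment exchange would append further steps: a wait for the credit organization's response, then a confirmation with a transaction number and date.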
The beauty of NFC is that there’s no need for anyone to activate a pairing. It automatically happens whenever the appropriate devices are within proximity of each other. No activity can proceed, however, unless the reader confirms that the communication is wanted.
The downside is that the data transfer rates are slow. Currently, the rates are limited to 106Kb/s, 212Kb/s, and 424Kb/s, which makes even 802.11b’s 11Mb/s feel blazingly fast. For now, though, this is more than adequate for the type and quantity of information that is typically passed between NFC devices.
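A quick calculation shows why those rates are adequate for pointers but not for bulk data, and why NFC usually carries URLs and tokens rather than the content itself. The payload sizes are our own assumptions.

```python
# Transfer time at NFC's top rate of 424Kb/s for two assumed payloads.

def transfer_ms(payload_bytes, rate_kbps):
    return payload_bytes * 8 / rate_kbps  # kilobits/s = bits per millisecond

print(f"{transfer_ms(100, 424):.1f}ms")              # short URL record
print(f"{transfer_ms(1_000_000, 424) / 1000:.0f}s")  # 1MB photo: ~19s
```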
Because the distance required to connect two devices is so small and because NFC is still developing, security concerns are minimal—for now. Possible threat vectors include faux tags that can snag data from your smartphone, and reader devices that can steal data from the tags on your credit cards or key ring.
Device owners would have to approve a conversation between their reading device and the faux tag for the first breach to happen. The latter example is far more likely to occur—someone would have to breach your personal space while holding an NFC reader in hand to swipe your data. For public transit commuters in urban environments, this is an everyday experience.
Regardless, the possibilities for, and interest in, NFC are great enough that we’ll see widespread adoption within two years. And it has the very real potential to dent the revenue of commerce transaction companies like Visa and MasterCard. In a world where you pay for everything by smartphone, who needs Visa to handle the transactions?
To defend its turf, Visa has been conducting an NFC trial program for the last six months in the Spanish resort town of Sitges to investigate the viability of making Visa payments using NFC-based smartphones. By all accounts, both the company and the 1,500 trial users liked the result. MasterCard has performed similar tests with similar results.
Not much has happened to good old Wi-Fi since 802.11n arrived on the scene about six years ago, but a new protocol that the 802.11 WG (Working Group) is currently stirring up might turn out to be much bigger and way faster than 802.11n. It’s called 802.11ac, and it promises a whopping 1Gb/s throughput by improving modulation and extending 802.11n’s MIMO scheme to extreme levels. The only real bad news is that we may have to wait a while to experience it. We’ll explore the specifications of this budding standard and its potential availability below.
Where 802.11n offered a dual-band solution (2.4GHz and 5GHz), 802.11ac (dubbed VHT, for Very High Throughput) operates solely in the 5GHz band. This is still considered cleaner spectrum than 2.4GHz because, although 802.11n can use it, few 802.11n access points actually occupy much of the higher band.
The basic specifications for 802.11ac, as currently defined, are as follows:
Wider channel bandwidths: 80MHz and 160MHz channel bandwidths (vs. 40MHz maximum in 802.11n). The 80MHz channel is mandatory for stations (STAs); 160MHz is optional.
More MIMO spatial streams: Support for up to eight spatial streams (vs. four in 802.11n).
Multiuser MIMO: Multiple stations (STAs, typically handheld or mobile devices), each with one or more antennas, can transmit or receive independent data streams simultaneously. Downlink MU-MIMO (a single transmitting device with multiple receiving devices) is an optional mode within the specification. The upside of these multistation enhancements is that routers or host computers will be theoretically capable of streaming HD video to multiple clients throughout a networked environment.
Space Division Multiple Access (SDMA): Streams of data are resolved spatially as opposed to by frequency. This is similar to 802.11n’s MIMO approach and boosts throughput while also ensuring signal strength and fidelity.
Modulation: 256-QAM (quadrature amplitude modulation), rate 3/4 and 5/6 is used to carry data, as opposed to 64-QAM, rate 5/6 in 802.11n. The result should be considerably improved throughput. (This is not the same as the digital television QAM standard.)
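The modulation math behind that throughput claim is straightforward: each symbol carries log2(M) raw bits for M-point QAM, and the coding rate sets how many of those bits are user data rather than error-correction overhead. The function name below is our own.

```python
# Data bits per symbol for the QAM/coding-rate combinations in the text.
import math

def data_bits_per_symbol(constellation_size, coding_rate):
    return math.log2(constellation_size) * coding_rate

n_64qam   = data_bits_per_symbol(64, 5/6)    # 802.11n best case: 5 bits
ac_256_34 = data_bits_per_symbol(256, 3/4)   # 802.11ac, rate 3/4: 6 bits
ac_256_56 = data_bits_per_symbol(256, 5/6)   # 802.11ac, rate 5/6: ~6.67
print(f"best-case gain over 802.11n: {ac_256_56 / n_64qam:.2f}x")
```

So 256-QAM alone buys roughly a third more throughput per symbol; the wider channels and extra spatial streams supply the rest of the gain.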
The chart above describes a series of possible 802.11ac usage scenarios based on device and network configurations.
Other features include improved beamforming, which will enable the multiple signal emissions to work together, and MAC modifications to support the multiclient changes noted above. The standard as currently specified is also backward compatible for 20/40/80/160MHz channels as well as 802.11a/b/n devices.
It’s worth noting that while 802.11ac’s goal is to produce transfer rates as high as 1Gb/s, rates will vary depending on the exact scenario. We’ll insert our usual caveat here: Real-life transfer rates are always lower than theoretical throughput rates—sometimes embarrassingly so. 802.11ac will be faster than 802.11n, but probably not as fast as the throughput rates claim. For example, 802.11ac will probably operate in the 350Mb/s range, not 1Gb/s—which is still a huge step up from 802.11n’s 160Mb/s (or so).
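Turning those real-world estimates into something tangible: here is how many simultaneous HD video streams each generation could carry at the article's estimated rates. The 8Mb/s-per-stream figure is our own assumption for typical 1080p video.

```python
# Simultaneous HD streams at the estimated real-world throughput rates.

HD_STREAM_MBPS = 8  # assumed bitrate for one 1080p stream
streams = {name: rate // HD_STREAM_MBPS
           for name, rate in [("802.11n", 160), ("802.11ac", 350)]}
print(streams)  # {'802.11n': 20, '802.11ac': 43}
```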
That said, given the use of multiple signals, it’s theoretically possible for 802.11ac to exceed 1Gb/s under ideal MU-MIMO conditions. At the very least, this architecture will permit much faster file synchronization and backup, and may even permit direct transmission of wireless video signals.
As if we haven’t had enough competing standards over the years, the 802.11 Working Group is also working on an 802.11ad specification that operates in the 60GHz band. Fortunately, it and 802.11ac don’t compete; they can, in fact, be used in complementary situations. For example, using both 5GHz and 60GHz interfaces, it’s possible to carry typical network data over 802.11ac throughout the house while using 802.11ad for streaming media within rooms. Current assumptions indicate that 802.11ad and its potential 6Gb/s transfer rate should be able to handle as many as three HD videos simultaneously.
The semi-bad news is that 802.11ad parallels WiGig’s goals. And while 802.11ad is still to come, WiGig already enjoys support from Atheros, Broadcom, and Intel. Despite the considerable stature of these three companies, this is only semi-threatening to 802.11ad, because industry support and alliances are abandoned or assimilated with frightening regularity, for a variety of reasons.
As always, backward compatibility is a mixed bag. Its presence is understandable, but insisting on it often ensures that weaknesses built into prior technology limit performance. With 802.11n equipment already in use, it would be interesting to see the spec architects draw a line in the ether and offer a fresh starting point for a new class of WLAN. This is not likely.
Assuming that the ISPs don’t start throttling bandwidth—a valid concern given the recent data limit edicts by AT&T—the implications of real-world data transfer rates of 350Mb/s are potentially revolutionary, particularly when used in tandem with 802.11ad devices. Video transmission, networked virtualization, remote control, and basic large-file transfers all suddenly become much more practical.
So when will we get our hands on 802.11ac tech? The sad answer is not anytime soon. The draft standard will likely be finalized in late 2012. Assuming this is the case, final approval probably won’t come until a year or so later, in late 2013, which means we probably won’t see officially sanctioned 802.11ac consumer devices until then.
But, just as with 802.11n devices, we’ll likely be faced with confusing draft standards before the final 802.11ac spec is approved. Remember “draft-n” and its variants? We’ll probably face the same coin toss, with the same risk of buying incompatible gear. Our take is that it’s a small price to pay for doubling our wireless transfer rates.