The World Wide Web has been around for 20 years as of today and broadband internet has evolved considerably over the latter half of that timespan in the US. Whereas just a few years ago, large parts of the country were relegated to pokey 56K dial-up connections over standard phone lines, now multi-megabit broadband connections are commonplace and speed increases are being introduced regularly. In fact, in some test markets, broadband at gigabit speeds is on the way. And yes, that’s gigabits with a “G,” as in roughly 17,800x more bandwidth than 56K dial-up.
Looking at the future of broadband Internet
We also have many more choices today. Connecting to the Internet used to mean firing up AOL for millions of users. Now, though, most consumers can choose between multiple service providers, which offer cable, DSL, or even wireless broadband connections with plenty of bandwidth for all but the most demanding users. Broadband may not be universally available here in the states just yet, but availability is far better than it was, and it’s consistently improving.
Despite myriad advances made to the country’s broadband infrastructure, the story is not all good. According to a few recent studies, the United States still trails some other nations in multiple broadband-related categories, including average connection speed and penetration. For example, South Korea’s average connection speed is more than double that of the United States—16.7Mbps vs. 6.1Mbps—and the United States ranks 36th in overall connectivity.
There’s more to broadband than just bandwidth and penetration, however, and we hope to fill you in on the details here. Our goal is to help you to better understand the various technologies available now and outline some of the advances coming in the future. We’ve also got some practical tips for changing ISPs and optimizing your current broadband connection on tap, as well.
Get connected over copper, fiber, wireless, or satellite
There are a number of different ways consumers in the United States have access to high-speed broadband Internet connections. Some, like DSL, leverage existing telephone network infrastructures, while others, like satellite or LTE wireless, use relatively new technologies. Although broadband isn’t accessible to everyone in the country, there are multiple options available for most consumers and the choices that are available continue to mature and evolve.
The most common broadband connection types in the United States include digital subscriber line (or DSL), cable, fiber-to-the-home solutions, wireless, and to a lesser extent satellite. Wireline solutions like cable and fiber-to-home will typically offer the highest-bandwidth, lowest-latency connections, and DSL is usually the most affordable, but all of the connection types mentioned here have multi-megabit plans available from numerous Internet service providers (ISPs) in many parts of the country. Before we dig in, also note that all of the broadband connection technologies we discuss here are sometimes referred to as “last mile” or “network edge” connections. What that means is that they’re the connection types used by Internet service providers to make the link between end users and the core backbones of the Internet.
DSL modems like the D-Link DSL-520B connect through standard copper phone lines to provide broadband Internet access.
According to the most recent data available on the National Broadband Map, DSL is the second most accessible broadband technology in the United States, behind only the various wireless technologies. In the locations where high-speed broadband is available, one form of DSL or another is offered to 88.9 percent of those customers.
Although “DSL” is a term thrown around freely, it actually encompasses an entire family of technologies, which includes asymmetric digital subscriber line (ADSL), symmetric digital subscriber line (SDSL), integrated services digital network (ISDN), rate-adaptive digital subscriber line (RADSL), and high bit-rate digital subscriber line (HDSL), among a few others. DSL leverages the copper cabling used throughout the telephone network to transmit digital data, and as such, the bandwidth offered by the various technologies will vary based on a few factors, like the quality of the physical connection and distance from the exchange, sometimes called the “central office.”
DSL is typically more affordable than other solutions because it’s cheaper to implement over the existing telephone network, versus deploying new, high-bandwidth fiber cables over the same expanse. Though sometimes cheaper, many DSL solutions can still offer significant bandwidth to end users. Sonic.net, for example, is one of the best-regarded DSL providers in the nation, with plans that offer download speeds of up to 20Mbps. It’s able to offer DSL speeds so far above the national average of about 4Mbps by using VDSL2 bonding technology that essentially links dual copper pairs into single connections. Other DSL providers also leverage bonding technology to increase the effective amount of available bandwidth to end users, but the fastest ISPs are typically concentrated in the more densely populated areas of the country, like California and the Northeast.
A typical DSL setup in a home consists of little more than a filter (or filters) used to separate voice and data signals between telephones and a DSL modem. The technology hasn’t changed much in recent years, so massive speed increases haven’t been offered by many DSL providers, but the technology is mature and reliable, and should suit the needs of mainstream consumers. In the future, however, large bandwidth gains are still possible with DSL. Alcatel-Lucent, for example, announced that through a technology advanced by Bell Labs, it has achieved 300Mbps over two DSL lines (through bonding) at a distance of 400 meters. The technology works by combining bonding with something called Phantom mode and vectoring. Phantom mode creates a third, virtual pair on top of the existing two pairs used in the DSL lines. Vectoring technology then filters out interference and crosstalk among them all. The bandwidth of the two physical pairs and the virtual pair is then combined into a single, ultra-high-bandwidth pipe.
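To make the aggregation concrete, here's a quick illustrative sketch. The per-pair rate is a hypothetical round number we assume for illustration; Bell Labs' announcement gave only the 300Mbps aggregate figure, not a per-pair breakdown.

```python
# Illustrative math for the Phantom-mode demo. The per-pair rate below
# is an assumption for illustration, not a published figure.
PHYSICAL_PAIRS = 2   # the two real copper pairs in the bonded DSL lines
VIRTUAL_PAIRS = 1    # the "phantom" pair created across the two lines
per_pair_mbps = 100  # assumed post-vectoring rate per pair (hypothetical)

total_mbps = (PHYSICAL_PAIRS + VIRTUAL_PAIRS) * per_pair_mbps
print(f"Aggregate bandwidth: {total_mbps}Mbps")  # 300Mbps
```

The point of the sketch: the phantom pair is pure gain, since it rides on copper that's already in the ground, and vectoring is what keeps the three signals from drowning each other in crosstalk.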
The Data Over Cable Service Interface Specification, or DOCSIS, is used by many cable television operators to provide broadband Internet access over their existing network using a cable modem, like the Motorola SB6120 pictured here.
On some level, cable Internet access is similar to DSL. However, instead of using the telephone network, cable Internet leverages the cable television infrastructure to provide a broadband Internet connection. Also like DSL, cable Internet is relatively pervasive and is the next most common wireline broadband connection technology in the United States. In areas where broadband is available, cable Internet access is an option for 85.2 percent of consumers.
Many of the technologies employed by cable Internet access providers are determined by the Data Over Cable Service Interface Specification, or DOCSIS. DOCSIS was initially developed by CableLabs, a not-for-profit research and development consortium founded by a number of cable television providers, along with a host of additional contributors, including the likes of Broadcom, Cisco, Conexant, Intel, Motorola, Netgear, Texas Instruments, and a handful of other companies.
Cable Internet is also one of the more mature broadband technologies offered in the United States and bandwidth available to end users is relatively high. If we disregard some fledgling fiber-to-home solutions, cable Internet is among the fastest in the nation. It is not uncommon for cable service providers to offer premium plans in the 50Mbps to 100Mbps (download) range, at prices below $100 month. It is also common to see cable Internet included in “triple play”–type packages that bundle Internet, television, and phone services on a single bill.
Although fast and relatively affordable, one of the disadvantages of cable Internet is that bandwidth is shared not only on the provider’s core network, but among smaller nodes, or groups of residents, as well, which can lead to slowdowns during peak usage times. If there aren’t numerous users concurrently consuming large amounts of bandwidth, the slowdowns may be imperceptible, but on more congested networks the slowdowns can be significant.
Though already fairly mature, bandwidth gains are still likely as providers improve their networks and implement more features of the DOCSIS 3.0 specification. For example, DOCSIS 3.0 allows for bonding of multiple upstream and downstream channels to increase total available bandwidth. The specification calls for hardware to support a minimum of four upstream/downstream channels, which can each offer a maximum of 42.88Mbps, but there is no maximum number of channels defined. An eight-channel bonded configuration could theoretically offer a connection speed of up to 343Mbps.
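The channel-bonding arithmetic is simple enough to sketch. Total theoretical throughput scales linearly with the number of bonded channels, each capped at the 42.88Mbps figure above:

```python
# DOCSIS 3.0 downstream channel bonding: peak throughput scales with
# the number of bonded channels, each capped at 42.88Mbps.
PER_CHANNEL_MBPS = 42.88

def bonded_throughput(channels):
    """Theoretical peak downstream rate for a bonded configuration."""
    return channels * PER_CHANNEL_MBPS

print(bonded_throughput(4))  # spec minimum of four channels: 171.52Mbps
print(bonded_throughput(8))  # eight bonded channels: 343.04Mbps
```

Real-world speeds will land below these theoretical peaks, since the same channels are shared among subscribers on a node.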
Fiber-optic cables can carry more data, over much longer distances, than copper wire.
Some of the more recent, ultra-high-speed broadband services being offered to US consumers consist of newer fiber-to-the-home deployments. Although fiber-to-the-home, or FTTH, is the most common term end users are likely to hear, there are numerous types of fiber deployments currently in flight across the country. Fiber-to-the-neighborhood (FTTN), fiber-to-the-building (FTTB), fiber-to-the-premises (FTTP), and fiber-to-the-desk (FTTD) are all terms you may hear bandied about. They’re all fairly self-explanatory; the significance of each deployment type is the peak performance that can be achieved by each architecture.
To put it simply, the closer the optical fiber cable is brought to end users, the faster the broadband connection can be. Whereas DSL providers typically offer 10Mbps–20Mbps and cable providers up to 100Mbps or so, fiber-to-home providers can offer hundreds of megabits or even full gigabit connections. Verizon’s FiOS service, for example, offers a 300Mbps plan in some parts of the country. Google Fiber, which is currently being built out in Kansas City, will offer speeds up to 1Gbps, and Sonic.net offers fiber services in parts of California where users can choose up to 1Gbps services, as well.
Prices for these exotic broadband services vary significantly from more than $200 a month for FiOS’s 300Mbps plan, to only $69 a month for Sonic.net’s offering. These services, however, are available to only a small fraction of Americans at this time, so competition among the various providers is essentially nonexistent. When asked about current fiber-to-home offerings, Sonic.net CEO Dane Jasper said, “None of these competitive efforts have any substantial national market share at this time, and I don't believe they have much influence on the incumbents except in very small regional pockets.” He also said, “Telcos will push fiber closer to the home (or, in the case of Verizon, all of the way),” however, which means some very good things are on the horizon. Because fiber-optic cables offer much more bandwidth than copper wire, over longer distances, it is the most future-proof of the broadband technologies we mention here. Rest assured, it will continually be brought closer and closer to end users, and more bandwidth will be available as a result.
Unfortunately, as of the most recent data available on the National Broadband Map, direct fiber Internet services are only available to 17.8 percent of potential broadband subscribers in the United States. For fiber Internet service to have a more meaningful impact on the broadband market, it’s going to have to reach a much larger audience. That should happen in time, though.
A few years ago, LTE wireless networks didn’t exist. Now they cover more than 75 percent of the nation.
Wireless broadband encompasses a handful of technologies, including Wi-Fi, WiMAX, and the various cellular networks, among a few others. By far, the most pervasive of these technologies as a service is the cellular network, which thanks to recent LTE build-outs, can offer relatively high peak bandwidth under certain conditions, at affordable rates.
We’re all familiar with Wi-Fi, which is designed to cover relatively small areas and not really sold as a service, except for temporary hot-spot applications. WiMAX (Worldwide Interoperability for Microwave Access) is a longer-range technology designed to deliver last-mile wireless broadband access to end users at multi-megabit speeds, as an alternative to wireline technologies like DSL or cable. WiMAX is available from providers in about 80 US markets, and a large number of additional markets around the world, but it isn’t very popular as a residential solution. The 3G and 4G cellular networks, however, account for a huge portion of Internet traffic, mostly due to the popularity of smartphones and other mobile devices. 4G LTE networks in particular have been rapidly expanding in recent years and offer relatively high bandwidth. In real-world situations, in markets like New York, San Francisco, and Austin, Texas, 4G LTE broadband can offer upwards of 35Mbps down and 15Mbps up, with much higher theoretical numbers possible.
Taken as a whole, broadband wireless Internet access is the most widely available connection type in the country. According to the National Broadband Map, wireless Internet is an option for 98.7 percent of consumers living in areas where broadband connections are available.
As useful as wireless Internet can be, it has some major drawbacks. For one, it is relatively expensive. Wireless data plans typically fall in the $20–$100-a-month range and offer limited amounts of data usage. For example, Verizon Wireless offers a 4GB-per-month shared data plan for $30 and a 12GB plan for $70. Exceed those limits, and you’ll have to pay additional fees and/or contend with data throttling. Wireless Internet is also more susceptible to interference than other connection types, and network performance varies wildly depending on a number of factors, including distance from the tower and network congestion. As such, wireless services are best suited to mobile devices, as a backup to wireline solutions, or for casual users that aren’t likely to hit the imposed data limits.
What comes after 4G LTE is still up in the air. A 5G standard has yet to be finalized and the 4G build-out is still far from complete. We can reasonably expect lower latency and more bandwidth at longer ranges, but we won’t know for sure until a spec is finalized.
In areas where wired broadband is unavailable, satellite Internet may do the trick. It doesn’t offer anywhere near the bandwidth of traditional wireline solutions, however.
Satellite Internet is more of a last resort than a viable solution for most consumers in need of broadband. The technology is a godsend for people who live in rural or remote areas where wireline broadband solutions are not available, but residential satellite broadband speeds simply can’t match those of xDSL or cable and costs are usually higher, too.
Typical satellite Internet speeds hover in the 1Mbps to 2Mbps (download) range, though some of the latest technology from providers like HughesNet offers up to 15Mbps down and 2Mbps up. There are monthly bandwidth caps in the 20GB–40GB-per-month range, however, and costs for even the more entry-level plans are somewhat higher than more common wired solutions.
Advancements in satellite Internet will come as compression and bandwidth-sharing technologies are improved, but the most significant gains can only come as newer, more advanced satellites, with higher total capacities, are put into orbit.
To get a read on broadband’s future in the United States, we talked to a couple of folks well versed in the subject: Patrick Moorhead, founder and principal analyst at Moor Insight and Strategy, and Dane Jasper, CEO of Sonic.net. When asked about which of the broadband technologies available in the United States will be the most pervasive moving forward, Moorhead said, “Wireless broadband will be the most pervasive in the future, given that it touches so many people in so many places. Wi-Fi wireless in particular will be expanded significantly as service providers attempt to string networks together to take some of the traffic off of congested 3G, 4G, and LTE networks.” He continued, “Cable is the winner in terms of the price-to-speed equation, in that most of the investment is a sunk cost. Fiber, as in Google Fiber, is the fastest, but also the costliest to install. Satellite will continue to play a niche role, serving hard-to-reach and rural areas. Its asymmetry and line-of-sight requirements outweigh any kind of downlink speed advantage.” Dane Jasper mostly agreed, stating, “Domestically, you will see a continued slow march of the incumbent duopoly; cable will gradually upgrade to higher DOCSIS versions as they become available and feasible, and will split nodes in the meanwhile to avoid congestion—at least to the point of avoiding customer churn. Meanwhile, telcos will push fiber closer to the home—or, in the case of Verizon, all of the way—while rolling out faster xDSL technologies: ADSL2+ and VDSL2 today, with bonding and then vectoring.” Jasper added, “Wireless is also a factor to consider. With LTE's very-high-speed capabilities, and consumers’ interest in tablets and other portable devices, these services are a potential alternative to wireline products.”
We also asked what they thought pervasive, ultra-high-speed broadband could mean for consumers, and Moorhead proclaimed, “New usage models will emerge with the advent of fast, reliable broadband. With faster broadband, most of our computing can be done in the cloud, meaning more consistent, reliable, and less expensive experiences. Low-priced displays able to run any app will be all over the house, so literally, every room will enable access to every app and piece of content, anytime.” Sounds good to us, though we don’t want to downplay the need for fast local storage, as well.
As for why the United States tends to lag behind many other developed nations and what we could do to improve the situation, Dane Jasper put most of the blame on misguided government policies and regulation. He said, “Reversing the course selection of a multi-modal competitive model, which the Republican FCC charted for us in the early 2000s, is the quickest way to resolve the domestic broadband issue. Europe and Asia followed our regulatory course from the 1996 Telecom Act, and stuck with it—while in the United States we faltered, fostering instead a duopoly. While incumbent cable and telcos have made substantial upgrades—DOCSIS 3.0, FiOS, U-verse—we continue to over-pay for under-delivery of speed, generally with consumption caps.” Patrick Moorhead’s view was somewhat different. Moorhead said, “Countries leapfrog each other as it relates to broadband. The United States was viewed as the mobile laggard during the EDGE days, but now has one of the top spots in LTE. Countries like Korea and Japan will continue to dominate with speeds, unless the US government would subsidize fiber rollout. Given the US budget challenges, I don’t see that happening, meaning the United States does not gain leadership footing in broadband.”
What is super-high-speed Internet good for, anyway?
Game consoles like the Xbox 360, smart TVs and appliances, Internet radios, and all of the other connected devices in your home consume bandwidth. Rumor has it that the next Xbox will require an always-on Internet connection.
Netflix recommends a 5Mbps broadband connection for its highest-quality movie streams, which can use approximately 2.3GB an hour.
In many circumstances, the benefits of an ultra-fast broadband connection may not be immediately apparent. There are other factors besides peak bandwidth that ultimately affect a user’s experience online and if you’re not using the bandwidth you already have available, upgrading to a faster plan isn’t going to make much difference. However, as our needs for more bandwidth increase, the benefits of some of the more advanced broadband technologies become clear.
As we start saving more data in the cloud, streaming more HD content, and increasing the number of connected devices in our homes, our bandwidth needs grow. Just a few years ago, having one or two PCs connected in a home was typical. Today, though, it’s not uncommon to find a dozen or more connected devices, when you account for smart appliances and televisions, mobile devices, game consoles, desktop systems, and laptops.
How much bandwidth you’ll require will obviously vary based on the usage habits of those in your household, but we can give you some rough guidelines and expectations. For example, let’s say you’ve got three users in your home. One is playing a game online, while the other two are streaming HD movies or television from a service like Netflix. For their highest-quality streams, Netflix recommends a 5Mbps connection; a typical stream can consume about 2.3GB an hour. The gamer will use a minimal amount of bandwidth, but the two users streaming video will likely saturate a 10Mbps connection.
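A quick sketch of that three-user household makes the math plain. The per-activity rates below are rough approximations (the gaming figure in particular is an assumption; gameplay traffic is small, though game downloads are another story):

```python
# Rough household bandwidth budget using the article's example household.
# Per-activity rates are approximations, not measured values.
activities_mbps = {
    "online gaming": 1,   # assumed; gameplay traffic is minimal
    "HD stream #1": 5,    # Netflix's recommendation for top quality
    "HD stream #2": 5,
}
total_mbps = sum(activities_mbps.values())
print(f"Concurrent demand: {total_mbps}Mbps")  # 11Mbps: saturates a 10Mbps plan

# Data consumed by one HD stream at roughly 2.3GB per hour:
hours = 2
print(f"{hours}h of HD streaming: ~{2.3 * hours:.1f}GB")
```

Run the same exercise with your own household's habits; two simultaneous HD streams alone are enough to max out many entry-level plans.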
The speed differences between mainstream and high-end broadband plans are not trivial and neither is the cost. Actual differences will vary from provider to provider, but we’ll use Verizon FiOS as an example. A basic plan that offers 15Mbps down and 5Mbps up will run about $70 a month. Its flagship plan offers 300Mbps down and 65Mbps up, 20x and 13x increases in bandwidth, respectively, for $209 a month. If you can use that kind of bandwidth, the cost per megabit is much better with the high-end plan. To give an example of how those bandwidth ratings affect download speed, the 15Mbps plan can download a 5GB file in about 44 minutes. The 300Mbps plan can do it in 2.2 minutes.
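Here's how those download-time figures are derived. Note that ISPs rate connections in decimal megabits, and this ignores protocol overhead, so real downloads will take a bit longer than the math suggests:

```python
# Estimate how long a download takes at a given rated connection speed.
# Ignores protocol overhead and assumes the link runs at its full rating.
def download_minutes(file_gb, speed_mbps):
    megabits = file_gb * 8 * 1000  # GB -> megabits (decimal units, as ISPs rate them)
    return megabits / speed_mbps / 60

print(f"{download_minutes(5, 15):.1f} min")   # ~44.4 min for 5GB on a 15Mbps plan
print(f"{download_minutes(5, 300):.1f} min")  # ~2.2 min on a 300Mbps plan
```

The same function shows why the per-megabit economics favor the high-end plan: twenty times the bandwidth for roughly three times the monthly price.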
In an attempt to curb massive bandwidth consumption, some providers—especially wireless providers—have implemented bandwidth caps that kick in when consumption ticks past a certain level. For wireless providers, that number is usually in the 2GB–4GB-per-month range, while wireline providers like Comcast are in the 300GB-per-month range.
Some would argue that these caps are simply a tool to gouge consumers, while others claim it’s a means to ease network congestion. Sonic.net CEO Dane Jasper said this when asked about bandwidth caps: “I don't see caps as being related to network capacity concerns. To put it simply, the heaviest users, when capped, will still use their service during peak/prime time, and network capacity must be built to accommodate the peak load. The sustained use that the heavy users would make is spread around the clock, and doesn't have any substantial impact on capacity planning.” Whatever the case, if bandwidth caps become the norm, consumers could be in for significant cost increases in the future as our bandwidth needs increase.
Where is it?
As we’ve mentioned, broadband isn’t universally available across the entire United States just yet. According to a recent study by Akamai Technologies, 81 percent of the country has access to broadband with speeds greater than 2Mbps. That may not sound too bad, but with availability in only 81 percent of the nation, the United States ranks 36th among the countries included in the study. The global average is only 66 percent, which means the United States is decidedly ahead of the curve, but in countries like Germany, the Netherlands, and even Bulgaria, broadband connectivity is in the 94–96 percent range. Compared to the previous year, the United States increased its average by 8.6 percent, which puts the country among the fastest growers, but there is obviously still much work to be done if we’re going to catch the leaders.
If you ask those in the know why the United States trails many other nations in broadband availability and speed, you’ll likely hear three possible reasons: burdensome government regulations, high corporate tax rates, and the relatively high cost of bandwidth in the country. Solving these problems is going to take significant action on the part of the government and some initiative and cooperation from the private sector, but it appears we are on the path to success, especially as younger, more tech-savvy legislators are elected. The FCC’s Broadband.gov website has details on the Broadband Action Agenda and lists more than 60 initiatives the FCC intends to undertake over the next few years to implement the recommendations in the National Broadband Plan, which was introduced in March 2010. One of the goals of the National Broadband Plan is to provide 100 million American households with access to 100Mbps broadband connections by 2020.
It’s easier than you think
Switching ISPs is a major concern for some users, but it need not be. Unless you’re locked into a contract with a wretched provider or are married to an email address provided by your ISP, switching to a new provider should be painless. We suppose some users may also be forced to use a particular ISP due to specific work-at-home requirements implemented by their employer, but even then a call to the company’s IT-support department should yield results.
If you’re locked into a contract, perhaps due to a triple-play-type bundle that links phone, TV, and Internet service, there are still things you can do to switch. Although most ISPs don’t make specific uptime guarantees, there is still an implied level of reliability that needs to be met. If service is subpar, start by logging every outage or problem and contacting your ISP’s support team. Run regular speed tests too, and log every result that falls below your expected performance level. After enough reported issues, it will no longer be cost effective for the ISP to keep supporting your connection. Call your ISP, ask for a service manager to hear your case, and you’ll eventually be let out of your contract.
Should your ISP-linked email address be associated with numerous logins online, start by setting up a new account with a free service like Gmail and systematically change all of your login credentials. Also, give yourself some lead time and set up an auto-forward to send emails coming into your ISP-linked account to the new account. And check in with your ISP; many will allow access to the email account via webmail, even after you’ve moved on to another provider.
When or if you do make the switch, assuming you’ve got a router in your home network, connecting the new modem to your router is usually all that is necessary. Worst-case scenario, you’ve got to reconfigure your wireless settings in a new router, and maybe a few IP addresses and forwarding rules, but that’s about it.
Even your existing broadband service can be made faster with a few simple tweaks
Signing up for a fast broadband connection is an obvious first step to ensuring high speeds while surfing the web. Even with a speedy connection in place though, there are a number of things that can be done to ensure optimal performance and reliability. The routers thrown in when you sign up for service aren’t always of the best quality, and many service providers also have wimpy DNS (Domain Name System) servers, which are easily bogged down under load and introduce tons of latency. These issues are easily avoided, however, and performance and reliability can be increased with just a few tweaks and a bit of reconfiguration.
The routers bundled with many broadband service plans tend to be low-end, dumbed-down products that provide sub-par wireless coverage and are ill-equipped for numerous connections. If you’ve invested in a fast broadband connection, spend a few extra bucks on a high-quality router, as well. A good router will be outfitted with a faster processor, more RAM, and a better network switch. It will likely offer better wireless coverage, too, and provide faster, more reliable service, even if there are multiple devices attached, all sucking down gobs of data.
The Asus RT-N66U is a powerful wireless broadband router, with an integrated gigabit switch, that will outperform most of the routers bundled with residential broadband service.
For the best possible connection, your broadband modem should be located as close to the incoming feed as possible. For example, if you’ve got a cable modem, and the cable line coming into your home has been split numerous times before the modem is attached, the signal quality to the modem will be degraded. For the best performance, the cleanest signal should be fed to your modem, which means connecting it to the main line, as close to the initial split as possible.
Router positioning is also important if you have devices that connect wirelessly. If your router has omnidirectional antennas (and odds are that if you haven’t replaced the stock antennas, it does), it is best to position the router as close to the center of the area you’d like covered as possible.
The omnidirectional antennas included with most routers transmit (and receive) signals in all directions. For the best performance, that signal should be centrally located between devices.
Every time you type a URL into your web browser, a request is sent to a DNS server to obtain the corresponding website’s IP address. If that server is bogged down or just plain sluggish, it can be slow to resolve addresses and introduce unwanted latency. Try running the DNS Bench utility available at www.grc.com to ascertain the fastest DNS servers in your area, and use those in lieu of your ISP’s. You can designate which DNS servers to use in your TCP/IPv4 properties in Windows on each machine, or enter them into the requisite fields in the WAN section of your broadband router’s setup utility.
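If you'd rather roll your own quick comparison instead of using a dedicated utility, a simple sketch like the one below works on any platform. It times lookups by shelling out to nslookup (which ships with Windows, macOS, and Linux); the server addresses shown are Google's well-known public DNS resolvers, used here purely as examples:

```python
import subprocess
import time

def lookup_time(server, hostname="example.com"):
    """Time a single DNS lookup against a specific server by shelling
    out to nslookup. Returns elapsed seconds (includes process startup
    overhead, so compare results relative to one another)."""
    start = time.perf_counter()
    subprocess.run(["nslookup", hostname, server],
                   capture_output=True, check=True)
    return time.perf_counter() - start

def rank_servers(timings):
    """Sort a {server: seconds} mapping fastest-first."""
    return sorted(timings, key=timings.get)

# Example usage (requires network access):
#   times = {s: lookup_time(s) for s in ("8.8.8.8", "8.8.4.4")}
#   print(rank_servers(times))
```

Run each lookup a few times and average the results; a single query can be skewed by caching or a momentary network hiccup.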
Using the fastest DNS servers available in your area can significantly speed up web browsing.
Are you happy with your current Internet setup? Or are you still waiting for 4G LTE coverage and fiber optics to hit your neck of the woods? Let us know in the comments below!