Though the basic functionality of Wi-Fi routers has remained relatively unchanged since the olden days, new features have been added that help boost performance and allow for easier management
The band that a router operates on is key to determining how much traffic you will have to compete with. You would never want to hop on a congested freeway every day, and the same logic applies here. Currently there are two bands in use: 2.4GHz and 5GHz. Everyone and their nana is on 2.4GHz, including people nuking pizzas in the microwave, helicopter parents monitoring their baby via remote radios, and all the people surfing the Internet in your vicinity, making it a crowded band, to say the least. However, within the 2.4GHz band you still have 11 channels to choose from, which is how everyone is able to surf this band without issues (for the most part). But if everyone is using the same channel, you will see your bandwidth decrease. On the other hand, 5GHz is a no-man's-land at this time, so routers that can operate on it cost a pretty penny since it's the equivalent of using the diamond lane, and a great way to make sure your bandwidth remains unmolested.
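Why only some 2.4GHz channels play nicely together comes down to simple arithmetic: channel centers sit 5MHz apart, but each transmission occupies roughly 20MHz of spectrum. Here's a simplified sketch of that overlap math (the 20MHz figure is an approximation of real signal width):

```python
# 2.4GHz Wi-Fi channel centers are spaced 5MHz apart (channel 1 = 2412MHz),
# but each transmission occupies roughly 20MHz, so two channels interfere
# unless their centers are at least ~20MHz apart. Simplified model.
def center_mhz(channel):
    return 2412 + 5 * (channel - 1)

def channels_overlap(a, b):
    return abs(center_mhz(a) - center_mhz(b)) < 20

# Channels 1, 6, and 11 form the classic mutually clear set in the US band.
clear_set = [1, 6, 11]
for i in clear_set:
    for j in clear_set:
        if i != j:
            assert not channels_overlap(i, j)

print(channels_overlap(1, 3))  # True: centers only 10MHz apart
print(channels_overlap(1, 6))  # False: 25MHz apart, no overlap
```

This is why sticking your network on channel 3 or 9 can actually be worse than sharing 1, 6, or 11 with a neighbor: you end up partially overlapping two channels at once.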
This stands for multiple-input, multiple-output: the use of multiple transmitters and receivers to send and receive a Wi-Fi signal in order to improve performance, sort of like RAID for storage devices but with Wi-Fi. These devices split a signal into several pieces and send it via multiple radio chains at once. This improves performance in a couple of ways. A single signal has to bounce around the room before ending up at the receiver, which degrades performance; when several signals are sent at the same time, spectral efficiency improves because there's a greater chance of one reaching the receiver with minimal interference. It also boosts throughput by carrying multiple streams of data to the receiver at once.
Channel bonding is something that’s done by the router and the network adapter whereby parallel channels of data are "bonded" together much like stripes of data in a RAID. This technology is most prevalent in 802.11n networks, where channel bonding is required for a user to utilize the full amount of bandwidth available in the specification. The downside to channel bonding is that it increases the risk of interference from nearby networks, which can reduce speeds. Since each channel is 20MHz, "bonded mode" operates at 40MHz, so check your settings to see if you can enable this.
Every router adheres to a specific 802.11 standard, which governs its overall performance and features. In the old days, there was 802.11a/b, then 802.11g, then 802.11n, which is the most widespread specification in use today since it's been around for a few years and is relatively fast. Waiting in the wings is 802.11ac, which by default broadcasts on the uncongested 5GHz band, but is also backward compatible with 2.4GHz. Whereas 802.11g had a peak throughput of 54Mb/s, 802.11n tops out at 600Mb/s (300Mb/s is more typical), and 802.11ac more than doubles that to an unholy 1.3Gb/s. It achieves this speed increase by supporting up to eight spatial streams compared to 802.11n's four, and through increased channel width, using 80MHz and an optional 160MHz channel.
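Those headline numbers follow a simple scaling model: peak link rate is roughly the per-stream rate times the number of spatial streams. A back-of-the-envelope sketch using the commonly quoted short-guard-interval per-stream maximums (treat these as approximations, not spec-exact values):

```python
# Peak link rate ~= per-stream rate x number of spatial streams.
# Per-stream figures are the commonly quoted short-guard-interval
# maximums for each standard/channel-width combo; approximations only.
PER_STREAM_MBPS = {
    ("802.11n", 40): 150,    # 40MHz bonded channel
    ("802.11ac", 80): 433,   # 80MHz channel
}

def peak_mbps(standard, width_mhz, streams):
    return PER_STREAM_MBPS[(standard, width_mhz)] * streams

print(peak_mbps("802.11n", 40, 4))    # 600 -> 802.11n's 600Mb/s ceiling
print(peak_mbps("802.11ac", 80, 3))   # 1299 -> the familiar "1.3Gb/s"
```

Note that the marketing "1.3Gb/s" figure assumes three streams on an 80MHz channel; a two-stream 802.11ac client will never see it.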
QoS is a common feature on today's routers, and it lets you dictate which programs get priority when it comes to network bandwidth. You could, for example, throttle uTorrent while giving Netflix, Skype, or Battlefield 3 more bandwidth. One crucial point: QoS matters most for outgoing traffic such as torrent uploads, because your router can only control what leaves your network; incoming traffic has already crossed your ISP's link by the time the router sees it.
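Under the hood, QoS boils down to draining higher-priority queues before lower-priority ones. This toy scheduler (the application names and priority values are made up for illustration) shows the core idea:

```python
import heapq

# Toy QoS scheduler: packets are tagged with a priority (lower number =
# more important), and the "router" always transmits the most important
# packet waiting in the queue. A sequence counter preserves FIFO order
# among packets that share a priority level.
class QosQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue(5, "uTorrent chunk")   # bulk traffic: lowest priority
q.enqueue(1, "Skype audio")      # latency-sensitive: highest priority
q.enqueue(2, "Netflix segment")

print(q.dequeue())  # Skype audio
print(q.dequeue())  # Netflix segment
print(q.dequeue())  # uTorrent chunk
```

Real routers layer rate limits and traffic classification on top of this, but the priority-queue behavior is the reason your torrent slows down the instant a Skype call starts.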
High-end 802.11n routers are able to broadcast dual networks on both 2.4GHz and 5GHz bands, though the new 802.11ac standard uses the 5GHz band by default.
System RAM, or memory, seems like such a basic thing, but there’s still much to know about it
The clock speed of RAM is usually expressed in megahertz, so DDR3/1866 runs at an effective 1,866MHz (strictly speaking, 1,866 mega-transfers per second), at a certain latency timing. The problem is that modern CPUs pack so much cache and are so intelligent about managing data that very high-clocked RAM rarely impacts overall performance. Going from, say, DDR3/1600 to DDR3/1866 isn't going to net you much at all. Only certain bandwidth-intensive applications such as video encoding benefit from higher-clocked RAM. The sweet spot for most users is 1,600 or 1,866. The exception is integrated graphics: if the box will be running integrated graphics, reach for the highest-clocked RAM the board supports and you'll see a direct benefit in most games.
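Those transfer-rate figures translate directly into theoretical bandwidth: DDR3 moves 8 bytes per transfer, per channel. A quick sketch of the math:

```python
# Theoretical peak DDR3 bandwidth: transfer rate (MT/s) x 8 bytes per
# transfer x number of channels. Real-world throughput is always lower.
def peak_bandwidth_gbs(transfer_rate_mts, channels):
    return transfer_rate_mts * 8 * channels / 1000  # GB/s

print(peak_bandwidth_gbs(1600, 2))  # DDR3/1600, dual channel: 25.6 GB/s
print(peak_bandwidth_gbs(1866, 2))  # DDR3/1866, dual channel: 29.856 GB/s
```

That ~4GB/s of extra theoretical headroom is exactly what cache-heavy CPUs mostly fail to exploit, and what bandwidth-starved integrated graphics happily devours.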
Modern CPUs support everything from single-channel to quad-channel RAM. There isn’t really a difference between a dual-channel kit and a quad-channel kit except that the vendor has done the work to match them up. You can run, for example, two dual-channel kits just fine. The only time you may want a factory-matched kit is if you are running the maximum amount of RAM or at a very high clock speed.
Voltage isn’t a prominent marketing spec for RAM but it’s worth paying attention to, as many newer CPUs with integrated memory controllers need lower-voltage RAM to operate at high frequency. Older DDR3, which may have been rated to run at high frequencies, could need higher voltage than newer CPUs are capable of supporting.
Heat is bad for RAM, but we’ve never been able to get any vendor to tell us at what temperature failures are induced. Unless you’re into extreme overclocking, if you have good airflow in your case, you’re generally fine. We’ve come to feel that heatspreaders, for the most part, are like hubcaps. They may not do much, but who the hell wants to drive a car with all four hubcaps missing?
It’s pretty easy to understand capacity on RAM—16GB is more than 8GB and 4GB is more than 2GB. With unbuffered, nonregistered RAM, the highest-capacity modules you can run with a consumer CPU are 8GB. Registered, or buffered, DIMMs carry extra chips, or “buffers,” on the module to help take some of the electrical load off the memory controller, which is useful when running servers or workstations that pack in a buttload of RAM. ECC RAM refers to error-correcting code and adds an additional RAM chip so the system can correct single-bit errors and detect multi-bit errors, the kind of faults that can’t be tolerated in certain high-precision workloads. If this sounds like something you want, make sure your CPU supports it. Intel usually disables ECC on its consumer CPUs, even those based on the commercial ones. AMD, on the other hand, doesn’t. For most, ECC support is a bit overkill, though.
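To give a flavor of how extra check bits can locate a flipped bit, here's a toy single-error-correcting trick in the spirit of ECC (this is a textbook Hamming-style illustration, not the actual SECDED scheme real DIMMs use):

```python
from functools import reduce

# Toy error correction: the "syndrome" is the XOR of the positions of all
# set bits. Store it alongside the data; after a single bit flip, XORing
# the new syndrome with the stored one reveals the flipped position.
# Illustrative only; real ECC DIMMs use a proper SECDED Hamming code.
def syndrome(bits):
    return reduce(lambda acc, i: acc ^ i, (i for i, b in enumerate(bits) if b), 0)

data = [0, 1, 0, 1, 1, 0, 1, 0]
check = syndrome(data)          # stored alongside the data, like ECC bits

corrupted = data.copy()
corrupted[6] ^= 1               # a cosmic ray flips one bit

error_pos = syndrome(corrupted) ^ check
corrupted[error_pos] ^= 1       # flip it back

print(error_pos)                # 6
print(corrupted == data)        # True: data fully recovered
```

The point is that correction costs extra storage (the check bits) and extra logic, which is why ECC modules carry one more RAM chip than their non-ECC siblings.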
We’re not sure what RAM heatsinks do today except look cool.
The power supply doesn’t get all the attention of, say, the CPU or the video card, but disrespect the PSU at your own peril
The actual wattage of the PSU is the spec everyone pays attention to. That’s because 650 watts is 650 watts, right? Well, not always. One maker’s 650 watts might actually be more like 580 watts or lower at the actual temperature inside your case on a hot day. Despite all this, the wattage rating is still one of the more reliable specs you can use to judge a PSU. How much you need can only be answered by the rig you’re running. We will say that recent GPU improvements have caused us to back away from our must-have-1,000W-PSU mantra. These days, believe it or not, a hefty system can run on 750 watts or lower with a good-quality PSU.
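Working out how much wattage you need is mostly addition plus headroom. Here's a rough sketch; the per-component figures below are ballpark illustrations, not measured values, so substitute the numbers for your actual parts:

```python
# Rough power-budget estimate. Per-component wattages are illustrative
# ballparks, not measured values; check the specs of your actual parts.
components = {
    "CPU": 130,
    "GPU": 250,
    "motherboard/RAM": 60,
    "drives and fans": 40,
}

load = sum(components.values())
headroom = 1.3  # ~30% margin keeps the PSU out of its least efficient range
recommended = load * headroom

print(load)                # 480W estimated peak draw
print(round(recommended))  # 624 -> a quality 650W unit fits comfortably
```

Note how even a beefy single-GPU build lands well under 750W once you do the math, which is exactly why the old 1,000W reflex no longer holds.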
After wattage, efficiency is the next checkmark feature. PSU efficiency is basically how well the unit converts power from AC to DC; the lower the efficiency, the more power is wasted as heat. The baseline rating is 80 Plus, which means the unit is at least 80 percent efficient at 20 percent, 50 percent, and 100 percent load. From there it goes to Bronze, Silver, Gold, and Platinum, with the higher ratings indicating higher efficiency. Higher is better, but you get diminishing returns as you climb the tiers: an 80 Plus Silver PSU hits 88 percent efficiency at 50 percent load, while an 80 Plus Platinum hits 92 percent. (Required efficiencies for the higher tiers vary at different loads.) Is that worth paying 40 percent more for? That’s up to you.
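To put those percentages in concrete terms, power drawn from the wall equals the DC load divided by efficiency, and the difference leaves your PSU as heat. A quick comparison at a 50 percent load on a hypothetical 650W unit:

```python
# Wall draw = DC load / efficiency; the difference is wasted as heat.
# Compares 80 Plus Silver vs. Platinum at 50% load on a 650W unit.
def wall_draw_watts(dc_load, efficiency):
    return dc_load / efficiency

load = 325  # watts: a 50% load on a 650W PSU
silver = wall_draw_watts(load, 0.88)
platinum = wall_draw_watts(load, 0.92)

print(round(silver, 1))             # 369.3W at the wall
print(round(platinum, 1))           # 353.3W at the wall
print(round(silver - platinum, 1))  # 16.1W saved by going Platinum
```

Sixteen-odd watts around the clock adds up on a machine that runs 24/7, but for a gaming rig used a few hours a day, the payback period on a pricier Platinum unit can stretch into years.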
A single-rail PSU spits out all the power from a single “rail,” so all of the 12-volt power is combined into one source. A multi-rail PSU splits it into different rails. Which is better? On a modern PSU, it doesn’t matter much. Many of the problems with multi-rail PSUs date to the early days of SLI and Pentium 4 processors: PSU designs that favored CPUs, combined with the siloing of power among rails, proved incapable of properly feeding a multi-GPU setup, while single-rail designs had no such issues. These days, multi-rail PSUs are designed with today’s configs in mind, so multi-GPU setups are no longer a problem.
A “dumb” power supply is actually what 99 percent of us have: a PSU that simply supplies clean, reliable power. An “intelligent” PSU does the same but also reports telemetry to the OS via USB. Some smart PSUs even let you adjust the voltages on the rails from within the operating system (something you’d otherwise have to do manually on high-end units) and control fan speed based on temperature, too. Do you need a smart PSU? To be frank, no. But for those who like seeing how efficient the PSU is or what the 5-volt rail is doing, it’s pretty damned cool.
Modular PSUs are the rage and give you great flexibility by letting you swap in shorter cables, or cables of a different color, or to remove unused cables. The downside is that most high-end machines use all of the cables, so that last point in particular is moot—what’s more, we think it’s too easy to lose modular cables, which sucks.
Modular power supplies are the rage today—just don’t misplace the cables.