There are several practices that distinguish true power users from common folk. System building is one. Component benchmarking certainly applies. As does religious parts swapping. And then, of course, there’s hardware hacking. Hacking, more than anything else, exemplifies our ongoing quest for more—more performance, more functionality, more power—because we’re wringing this extra goodness from gear we already own, using crafty methods and occasionally pushing the bounds of practicality in the process; sometimes just for the heck (or should we say hack?) of it.
We know that GPUs and CPUs often have features disabled or dialed back in order to fit a price point. We’ll show you some nifty ways to access their hidden capabilities, as well as some fixes for inherent flaws. We also know that our gear can be made to do more than it was intended to with the help of third-party software, as you’ll discover in our webcam and Roku projects. And if you want to make your smartphone smarter, increase your Wi-Fi router’s range, or RAID your SSDs, we’ll turn you on to those tricks, too.
So what are you waiting for? Let’s get hacking!
After years of AMD hyping the Bulldozer microarchitecture for its supposed efficiencies, the initial batch of Bulldozer-based AMD FX Series processors arrived with a resounding thud due to disappointing performance and relatively high power consumption. But amid all the fervor following the launch, many enthusiasts seem to have missed the fact that AMD’s current flagship FX-8150 is still technically the fastest desktop processor the company has released to date. Of course, being the fastest processor in AMD’s line-up doesn’t mean there isn’t some frequency headroom left under the hood. It turns out that the Bulldozer-based AMD FX-8150 is a pretty decent overclocker.
We all saw the reports of AMD’s Bulldozer breaking overclocking records and earning a place in the Guinness World Records. But those overclocks were performed with liquid-helium cooling, binned chips, extreme voltages, and only a single Bulldozer module (two cores) enabled. Overclocking a retail-ready processor with all of its cores enabled using more traditional cooling methods is a different story altogether. Luckily, not much has changed with Bulldozer in the overclocking department; the tried and true methods of tweaking multipliers, voltages, and the HyperTransport clock that worked with the Phenom II carry over to Bulldozer, as well.
We set out to see what kind of overclocks were possible with an AMD FX-8150 using a standard air cooler and an AMD 990FX-based Asus motherboard. Instead of using the motherboard’s UEFI to overclock, though, we turned to AMD’s OverDrive utility, which lets users overclock from within Windows in real time, without wasting time on constant reboots.
The AMD OverDrive real-time overclocking and system-monitoring utility is a great tool for tweaking the performance of a Bulldozer-based system when used in conjunction with the right motherboard.
Because FX Series processors are “unlocked,” their multipliers can be raised or lowered to increase or decrease the CPU frequency at will. Increasing or decreasing the HT clock has the same effect, and pumping more voltage into a chip will typically allow for higher frequencies, as well, provided it is adequately cooled.
The FX-8150 has a base clock of 3.6GHz, which will dip down to 1.4GHz while idling. When half (or fewer) of its cores are being utilized, the FX-8150 is able to Turbo up to a peak frequency of 4.2GHz. When all of its cores are being fully utilized, the FX-8150 can Turbo up to 3.9GHz. And while all of this is happening, the processor’s voltage will fluctuate between approximately 0.85v and 1.36v. By disabling Turbo, turning the voltage up beyond 1.4v, and cranking up the CPU multiplier with AMD’s OverDrive utility, all of the cores on the chip can run at even faster speeds and offer much better overall performance.
How much faster the FX-8150 will run varies from chip to chip, but we found that at 1.4125v, our CPU could reliably hit 4.41GHz (22x multiplier x 200MHz HT clock). Any higher frequency at that voltage left the test bed unstable, and pumping upwards of 1.5v into a 32nm chip with air cooling isn’t ideal. To hit 4.41GHz, we simply launched OverDrive, chose Advanced mode, disabled Turbo, and moved the CPU multiplier and voltage sliders as necessary. Note that the CPU settings in your BIOS/UEFI should be set to Auto if you want to experiment with OverDrive.
When running at 4.41GHz, the FX-8150 offered up a Cinebench R11.5 score of 7.31, an increase of 1.3 points, or 21.6 percent, over the stock score of 6.01. And its temperature peaked at only 71 C. That kind of performance boost won’t allow the FX-8150 to overtake many of Intel’s faster chips, but it’s a huge gain nonetheless and one that’s easily obtained using OverDrive.
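The arithmetic behind those figures is simple enough to sketch in a few lines of Python (the numbers are the results reported above; note that 22 x 200MHz works out to an even 4.4GHz, so a 4.41GHz reading reflects a reference clock running a hair above 200MHz):

```python
def overclock_mhz(ref_clock_mhz, multiplier):
    # CPU frequency is the reference (HT) clock times the CPU multiplier.
    return ref_clock_mhz * multiplier

def percent_gain(stock, overclocked):
    # Relative improvement of an overclocked result over the stock result.
    return (overclocked - stock) / stock * 100

print(overclock_mhz(200, 22))              # 4400 MHz, i.e., 4.4GHz
print(round(percent_gain(6.01, 7.31), 1))  # 21.6 (percent, Cinebench R11.5)
```

The same two functions apply to any multiplier-based overclock in this article.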
In the early stage of a processor architecture’s lifetime, it is common for multicore dies with nonfunctional cores to be harvested for use in lower-end products, while fully functional dies end up at the high-end. Over time, however, as manufacturing processes mature and yield increases, there are fewer and fewer dies with nonfunctional elements to harvest. If there is still demand for lower-cost processors, though, in lieu of expending engineering resources designing a new, cheaper-to-produce core, chip makers like AMD will often disable perfectly good cores in an existing CPU design to satiate the market.
The UEFI utility on Asus’ P9X79 Deluxe motherboard allowed us to take our Sandy Bridge-E based Core i7-3960X to 4.75GHz by altering only a few options in the Ai Tweaker menu.
Such is the case with a number of chips in AMD’s aging Phenom II product line. After years in production, yields are high on quad-core versions of the chip, but there is still a relatively large demand for cheaper, dual- and triple-core Phenom II processors. As such, many of those dual- and triple-core chips have additional cores on-die that are functional, but dormant. Manufacturers of enthusiast-class motherboards, however, have devised BIOS/UEFI-level tricks to unlock those cores and turn cheap processors into something much more powerful.
Unlocking cores is very easy, provided you’ve got the right CPU and motherboard combo. Asus, Gigabyte, MSI, and other motherboard makers all offer Socket AM3/AM3+ boards with core unlocking capabilities (the boards require a SB7x0 or newer south bridge with Advanced Clock Calibration). As for the processors, many Phenom II X2, Phenom II X3, and Athlon II X3 chips are unlockable. Although not guaranteed, processors from newer batches should have no trouble being unlocked. If you’ve got one of these processors, do a quick search to see if others have had success with it—most likely they have.
Actually performing the unlock requires nothing more than entering the BIOS/UEFI, heading into the advanced CPU configuration menu, and enabling Core Unlocking or ACC (or whatever the motherboard manufacturer has called the setting). Keep in mind, though, that even if your chip unlocks without incident, it may require increased cooling to deal with the additional active cores.
Intel’s Sandy Bridge-E is a beast of a microarchitecture. It takes many of the good things about the already awesome Sandy Bridge and vastly expands upon them. Whereas Sandy Bridge-based processors sport up to four cores, 16 lanes of integrated PCIe 2.0 connectivity, dual-channel memory controllers, and 8MB of shared L3 cache, Sandy Bridge-E features eight cores, 40 lanes of PCIe 3.0-class connectivity, quad-channel memory, and up to 20MB of shared L3. We should point out, though, that current SNB-E based desktop processors have only four or six cores enabled and up to 15MB of shared L3.
The sum of SNB-E’s parts results in the fastest desktop processors Intel has released to date. Of course, there’s always some room for improvement with a little overclocking. Unfortunately, SNB-E inherited one of SNB’s undesirable traits, as well—a finicky BCLK, or base clock. Like original Sandy Bridge-based parts, Sandy Bridge-E processors offer limited flexibility for overclocking via BCLK manipulation. Users who want to alter CPU and memory frequencies via the BCLK are limited to just a few MHz in either direction (approximately 3-5MHz). Sandy Bridge-E, however, adds two new BCLK straps (125MHz and 166MHz) that weren’t available with Sandy Bridge, so there is some fun to be had there. And like K-Series SKUs, Core i7 Extreme Editions are fully unlocked, meaning CPU, Turbo, and memory clocks can all be adjusted by using different multipliers.
When overclocking SNB-E chips, power and cooling considerations are paramount. Running at its stock 3.3–3.9GHz speeds, the Core i7-3960X is rated for 130W, but power consumption and heat output increase substantially when the chip is overclocked. With that in mind, Intel and its motherboard partners have incorporated options to dynamically increase voltages when necessary and specify peak current thresholds. These new options and SNB-E’s more demanding power and heat considerations make the overclocking process somewhat more complex, but if you don’t feel like tinkering much and have a good cooler and PSU, playing with voltages and multipliers is all that’s necessary to achieve some monster overclocks with SNB-E. We should also point out that although options are available to disable SpeedStep and various C states, overclocking SNB-E only requires finding the right combination of BCLK, voltage, and maximum Turbo frequencies. By altering those options alone and not messing with SpeedStep or C states, the processor can still throttle down while idle to minimize power consumption and temperatures.
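As a rough rule of thumb, dynamic CPU power scales linearly with frequency and with the square of voltage (P ~ f * V^2). This back-of-the-envelope Python sketch (not an official Intel figure, and the 1.35v stock peak voltage is our assumption) shows why an overclocked SNB-E chip demands so much more from its cooler and PSU:

```python
def scaled_power(stock_watts, f_stock_ghz, f_oc_ghz, v_stock, v_oc):
    # Estimate overclocked power draw with the P ~ f * V^2 rule of thumb.
    return stock_watts * (f_oc_ghz / f_stock_ghz) * (v_oc / v_stock) ** 2

# Core i7-3960X: rated 130W at up to 3.9GHz Turbo, pushed to 4.75GHz at
# 1.425v (1.35v assumed as a representative stock peak voltage).
print(round(scaled_power(130, 3.9, 4.75, 1.35, 1.425)))  # roughly 176 watts
```

A ballpark 45W jump is exactly why a modest heatsink that's fine at stock can run out of headroom at these speeds.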
To give you some examples, most SNB-E-based processors can achieve 4.5GHz with decent air or liquid cooling. A large percentage of the chips can do 4.6GHz to 4.7GHz with minimal effort, and 4.8GHz should be doable with the right voltage (1.4v to 1.5v) and high-end cooling. 5GHz-plus should also be possible with select chips and more exotic cooling.
We did some overclocking with a Core i7-3960X Extreme Edition processor and Cooler Master Hyper 212 cooler with the excellent UEFI utility on the Asus P9X79 Deluxe motherboard and were able to push our particular chip to 4.75GHz. We achieved that speed using a 125MHz BCLK strap and a peak all-core Turbo multiplier of 38 (125MHz x 38) with a peak voltage of 1.425v. At that speed and voltage, however, we were pushing the limits of the cooler; the processor would approach the 90 C mark after extended periods of sustained load, and at 91 C, SNB-E-based chips will begin to throttle.
With the processor overclocked to 4.75GHz, the Core i7-3960X Extreme Edition’s Cinebench 11.5 score jumped from 10.51 (stock frequencies) to 13.99 (overclocked frequency), an increase of 33.1 percent. Not a bad performance boost for simply altering a few settings in our motherboard’s UEFI utility.
We have had the opportunity to evaluate a number of AMD Radeon HD 6900 series cards since their release. For the most part, the cards have been well-built and stellar overall performers. A small number, however, have suffered from minor cooling-related issues that can have an adverse effect on GPU temperatures. The issues have never caused any instability or major problems to speak of, but remedying them will lower operating temperatures, which in turn can result in a quieter, more overclockable card.
On the backside of reference Radeon HD 6900 series cards, there is a metal retention/spring plate that secures the GPU heatsink in place and applies constant pressure to ensure good contact with the chip. On occasion, that spring plate hasn’t been fully tightened or was slightly bent to the point where it wasn’t applying optimal pressure. While disassembling cards to fix the issue, we’ve also found that some Radeon HD 6900 series cards have had way too much thermal paste applied to their GPUs. Ideally, only a small amount of thermal interface material should be used to facilitate heat transfer from a chip to a heatsink; a paper-thin layer is all that is necessary. But on many of the Radeons we’ve disassembled, so much thermal paste had been applied that more oozed out from the sides of the GPU die than was needed to cover it in the first place. And having too much thermal paste applied to a chip can actually hinder cooling performance.
To ensure that an affected Radeon HD 6900 series card is optimally cooled by the stock heatsink, there are a few steps you need to take.
1 Disassemble the Card
To disassemble a reference Radeon HD 6900 series card, first remove all of the screws on the backside of the PCB that hold the rear stiffening plate in place, and then remove the plate. Then remove the two screws at the top of the case bracket above the MiniDP ports. Next, remove the four screws holding the heatsink’s spring plate in place. At this point, gently rock the entire cooler assembly and pull it away from the PCB, being careful not to yank the wires for the fan out of their connector. Once the cooler is loose, unplug the fan connector and set the cooler aside. Be careful not to remove any of the sticky thermal pads on the memory chips.
2 Re-apply Thermal Paste
With the card disassembled, you’ll want to clean off the old thermal paste. Use some isopropyl alcohol (or other cleaner safe for electronic circuits) to carefully clean all of the stock thermal paste from the GPU and heatsink’s base. Then apply a very thin layer of quality thermal paste to the GPU; the smallest amount necessary to cover the chip is all that should be used.
3 Re-attach and Adequately Tighten Spring Plate
Now the cooler can be reinstalled, but before re-attaching the spring plate make sure it is not bent or deformed in any way, and add a few thin shims to raise it slightly from the PCB and increase the spring pressure. Keep the shims very thin so as not to damage the GPU—we cut up an old rewards card from the supermarket. Stick the shims to the rubber pads on the underside of the spring plate and then put it in place and tighten the screws in a crisscross pattern. Then reinstall the backplate and you’re ready to test.
The cooling on our project card was definitely improved. Before touching it, the GPU on our 6970 idled at 44 C and peaked at about 82 C. After the mod, the GPU’s idle temp dropped to a steady 41 C. Peak temps still hit 82 C because the cooler’s fan speed is throttled based on load, but it took somewhat longer to hit the peak, and the card seemed to cool down faster, too.
Nvidia’s GeForce GTX 560 Ti, which is based on the company’s mainstream GF114 GPU, has proven to be quite a capable graphics card in its price segment. To prevent the card from being too fast and potentially encroaching on the more expensive GeForce GTX 570’s territory, however, Nvidia had to delicately balance the 560 Ti’s features and performance. The GeForce GTX 570 is powered by a pared-down version of the much more expensive GF110 GPU—to cannibalize its sales would be bad for Nvidia and its board partners, indeed.
With the right tools and a bit of experimenting, it’s easily possible to significantly boost the performance of the more affordable GeForce GTX 560 Ti. The MSI Afterburner GPU tuning utility, for example, gives users the ability to not only overclock their GeForce GTX 560 Ti cards (and many other graphics cards, as well), but also alter fan speeds and GPU voltages for even more extreme overclocks.
To illustrate just how much performance is left under the GeForce GTX 560 Ti’s hood, we grabbed MSI’s already factory-overclocked GeForce GTX 560 Ti Twin Frozr II card and did some tweaking with Afterburner. The first steps in the process are obviously downloading and installing Afterburner. We recommend downloading the latest beta build available at http://event.msi.com/vga/afterburner/index.htm to ensure the broadest compatibility. It wasn’t until Afterburner v2.2.0 beta 9 that the 560 Ti was even supported, so if you’ve got a newer graphics card, using the latest beta is the way to go.
After downloading and installing Afterburner, launch it. A simple menu with a few sliders and hardware-monitoring information will open. You could begin overclocking right away, but there are some hidden options you need to enable in order to unlock the full potential of the app. In the lower corner of the main window, click the Settings button. A new window will open with a number of options available; tick the Unlock Voltage Control and Unlock Voltage Monitoring options and then click OK. Then close and re-open Afterburner and the voltage monitor and voltage control sliders should be available.
Our particular MSI GeForce GTX 560 Ti Twin Frozr II started with a core voltage of 1000mV, or 1v (although it was reported at 0.95v in the voltage monitor), an 880MHz GPU clock with 1,760MHz shaders and 2,100MHz GDDR5 memory. (The actual memory clock was 1,050MHz with an effective data rate of 4.2Gb/s. Afterburner reports DDR speeds.) Like a CPU, pumping more voltage into a GPU should allow for higher frequencies—within reason—provided the GPU has adequate cooling. Without modding a card’s cooler, however, cranking up the voltage beyond about 5-9 percent isn’t advisable. The Twin Frozr II cooler from MSI is fairly capable, though, and the card has a robust voltage regulator module, so we were confident we could push our particular card much further than stock.
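If the three different memory numbers seem confusing, that's because GDDR5 transfers four bits per clock per pin, so the same memory can be quoted three ways. A quick sketch of how the figures relate:

```python
def gddr5_clocks(actual_mhz):
    # GDDR5 figures: the actual clock, the DDR rate that tools like
    # Afterburner report (2x), and the effective data rate (4x).
    return actual_mhz, actual_mhz * 2, actual_mhz * 4

actual, reported, effective = gddr5_clocks(1050)
print(actual, reported, effective)  # 1050 2100 4200 -> 4.2Gb/s per pin
```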
Without tweaking our card’s GPU core voltage, its GPU could hit 940MHz with decent stability. Increasing the GPU’s core voltage to 1.087v, however, allowed us to push the GPU all the way up to 992MHz, an increase of 112MHz (12.7 percent) over stock and 170MHz (20.6 percent) over Nvidia’s reference spec for the 560 Ti. Just for giggles, we also increased the memory clock to 2,161MHz, but didn’t have much success beyond that number. The fan speed was maxed out to ensure the lowest possible temps.
Our tweaking was well worth the effort. The stock MSI 560 Ti Twin Frozr II offered up 25fps and a score of 629 in the Unigine Heaven 2.5 benchmark and 32.2fps in Alien vs. Predator. At its overclocked frequencies, though, the card managed 26.9fps and a score of 681 in Unigine Heaven and 34.6fps in AvP, increases of 8.2 percent and 7.5 percent, respectively. Not bad for doing little more than downloading a free utility and turning a few knobs.
Although many GeForce-based graphics-card attributes can be altered using readily available applications, or even driver control panels, some hardcore modders prefer to edit a card’s BIOS to eliminate the hassle of manually dialing in settings after updating drivers or moving a card from system to system.
Modding a GeForce card’s BIOS requires a few utilities, namely GPU-Z, NiBiTor, and NVFlash (all of which can be downloaded at www.mvktech.net and www.techpowerup.com). Technically, GPU-Z isn’t an absolute necessity, but since NiBiTor has issues extracting BIOS files on some 64-bit Windows systems, GPU-Z can come in handy. GPU-Z is used to extract and save a card’s original BIOS, NiBiTor to edit BIOS values, and NVFlash to flash the tweaked BIOS onto the card being modded. Before performing a GPU BIOS mod, a quick disclaimer is in order: a bad flash or incorrect setting can render a graphics card unusable. In the event of a problem, installing a second graphics card in the system will let you boot and re-flash the affected card, but tread lightly and only change values you’re confident will work.
To mod a GeForce’s BIOS, fire up GPU-Z and click the button next to the BIOS version listing to extract and save the card’s original BIOS. Make a backup copy in case there’s trouble later, and then open the original file in NiBiTor. If the GeForce in question is from the pre-Fermi days, frequencies, voltages, and other options can be changed easily right on the main window’s various tabs. If you’ve got a Fermi-based GeForce, however, go to the Tools menu and choose Fermi Clocks and Fermi Voltages to alter either (min and max fan speeds can still be altered in the main interface). The frequency and voltage fields are rather cryptic for Fermi cards, though, so be sure to consult the oracle, i.e. Google, to know which to alter for your particular card, or try another utility called FermibiosCalculator to go it alone.
Once you’ve got the BIOS values tweaked to your liking, save the BIOS file in NiBiTor, copy it to a bootable disk along with NVFlash, flash the modded BIOS to your card, and you’re done.
Solid‑state drives are hot commodities for enthusiasts looking to squeeze every bit of performance from their systems. While even a single, midrange solid‑state drive is a huge upgrade over a standard hard drive, RAID-ing two (or more) SSDs can result in truly extreme performance. A pair of modern SSDs running in RAID 0, for example, can silently offer upward of 1GB/s of read bandwidth, with similar writes and nearly nonexistent access latency. That’s something no array of hard drives could muster.
Configuring RAID on an Intel-based motherboard requires little more than entering the option ROM and working through some simple menus. AMD, Marvell, and other RAID controllers can be configured in a similar fashion, as well.
Configuring a group of solid‑state drives in RAID is no different than doing the same with standard hard drives. We used an Intel X79 Express‑based motherboard from Asus for this endeavor, but the process is similar for a wide range of boards and chipsets.
With the SSDs connected to the RAID-capable ports on your motherboard (or RAID controller card), power up the system and enter its BIOS or UEFI. Then head into the advanced configuration or integrated peripherals menu and find the SATA configuration option. Once there, set the SATA ports to RAID mode, then save the changes and exit the BIOS/UEFI.
On the next boot, you should see a prompt at some point after the POST in which you can enter the RAID configuration menu/option ROM (on our Intel-based system, we had to press Ctrl+I). Enter the option ROM and you’ll see a handful of menu items. First you’ll have to select the drives to include in the array and create the volume. Once it’s created, you’ll then have to give the volume a name and choose the RAID mode—we chose RAID 0 for its high-performance characteristics. Then you’ll have to choose a stripe size (Intel recommends 128K for RAID 0) and finally set the total capacity of the RAID volume. Save the settings, exit the option ROM, and that’s it.
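If you're curious what the stripe size actually controls: RAID 0 interleaves fixed-size chunks of the volume across the member drives in round-robin order. This hypothetical Python sketch (the function is ours for illustration, not part of any RAID tool) maps a logical offset in the volume to a physical location:

```python
def raid0_locate(logical_offset, stripe_size, num_drives):
    # Map a logical byte offset in a RAID 0 volume to
    # (drive index, byte offset within that drive).
    stripe_index = logical_offset // stripe_size
    within_stripe = logical_offset % stripe_size
    drive = stripe_index % num_drives
    # Each drive stores every num_drives-th stripe, packed contiguously.
    drive_offset = (stripe_index // num_drives) * stripe_size + within_stripe
    return drive, drive_offset

# Two SSDs with Intel's recommended 128K stripe for RAID 0:
STRIPE = 128 * 1024
print(raid0_locate(0, STRIPE, 2))           # (0, 0): stripe 0 on drive 0
print(raid0_locate(STRIPE, STRIPE, 2))      # (1, 0): stripe 1 on drive 1
print(raid0_locate(2 * STRIPE, STRIPE, 2))  # (0, 131072): back to drive 0
```

Sequential transfers thus hit both drives at once, which is where the near-doubling of bandwidth comes from.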
Keep in mind that you’ll also need to install the correct drivers for your OS to properly detect the array, and that TRIM—an important feature of SSDs that helps maintain long-term performance—is not supported in RAID. The idle garbage-collection algorithms available in newer SSDs will still work, however.
Boring video conferences and subpar facial recognition may have been the only jobs some webcams were good for, but when paired with the right software, a webcam can be transformed into a powerful security tool.
The process is straightforward, too. First, make sure your webcam is connected to your PC or network, its drivers are installed, and the cam is operational. Then download and install Vitamin D Video. The app is free for single-camera installations and is available for both Mac OS X and Windows.
Vitamin D Video can transform a lowly webcam into a powerful security camera with object recognition.
Once you launch the app, a Camera Setup configuration wizard will open (if the wizard doesn’t open, the configuration can be completed by accessing the Tools menu and selecting Add Camera). From the Camera Type drop-down choose your camera type (either Network IP camera, or USB / Built-In), and then from the Camera drop-down select the specific camera you’d like to use; then click Next. Vitamin D will test the camera connection. If all is well, click the Next button; if not, ensure that your webcam is working. On the subsequent screen, give the camera location a name (e.g., Office, Front Door, etc.), click Next again, and then click Finish to complete the initial phase of the setup process.
At this point, Vitamin D is capable of capturing video, but a few more tweaks will exploit some of the app’s more powerful features. First, go to the Tools menu, select Options, and in the Options menu click the Move Video button to specify a directory other than the default option in which to save your videos. Then switch from the Preview pane to the Search pane by clicking the spyglass at the upper-left of the interface, and click the + button to define some rules. The app can track moving objects or specific shapes and distinguish between cars and people, for example, or even capture objects as they pass through specified thresholds. Once you’ve got your rules set up, switch on your camera in the app, and it’ll capture video as long as the camera is connected.
Over the last few years, as smartphones have gotten more powerful and prolific, wireless carriers have begun polluting them with more and more bloatware, not to mention abhorrent software like Carrier IQ. Perhaps it’s the unwanted software or the common enthusiast trait of craving total control over our devices, but rooting or jailbreaking smartphones and flashing custom firmware, aka ROMs, has become increasingly popular. Not only can flashing a smartphone with a custom ROM eliminate objectionable bloatware, it can also add new features (like Wi-Fi tethering) and potentially increase performance.
The process of flashing a custom ROM to a smartphone is far from universal. In fact, there can sometimes be multiple methods for flashing a single phone. But there are some common threads among the myriad devices. Usually, you’ll have to obtain root access, somehow set the phone to accept downloads, use some sort of utility to flash a custom recovery image, and then from within the recovery image, flash the new ROM.
To demonstrate the process, we used a fairly new Samsung Galaxy S II i727 Skyrocket and an alpha build of the excellent CyanogenMod firmware. For specifics regarding your particular phone, we suggest perusing the forums at the smartphone-enthusiast website xda-developers, which has the goods on virtually all of the popular smartphones available.
1 Gather Files and Set Phone to Download Mode To flash the SGSII Skyrocket, we first must grab all of the necessary files, which consist of Odin v1.85, USB drivers for the SGSII, the custom CWM recovery image, and the alpha build of CyanogenMod—all of which are linked in the Skyrocket sub-forum on the xda-developers site. With all of the files in hand, we next have to put our phone in download mode. To do so, we have to shut the phone off (remove and then reinsert the battery to be sure) and then hold down both the Volume Up and Volume Down buttons, while simultaneously plugging in its USB cable (the other end of which is plugged into a Windows 7 system). Once in download mode, we release the buttons and then hit the Volume Up button one more time to bypass an onscreen warning message and continue on. Next, we install the USB drivers for the phone and once the phone is properly detected by Windows, we fire up Odin. If your phone is in download mode and its drivers are correctly installed, Odin should identify a device on a virtual com port. Here, our Samsung Galaxy S II i727 Skyrocket is identified as device 0 on COM5.
2 Flash the Recovery Image With the proper drivers installed and the phone plugged in, Odin immediately recognizes the device and is ready to flash the new recovery image. Odin is capable of flashing bootloaders, recovery images, modems, and some carrier-specific files. For our purposes, we only need to flash the recovery image. To flash the recovery image, we tick the field in Odin labeled PDA and then browse to the CWM recovery image file we had downloaded. With the file selected, we click Start and Odin does the rest. A few moments later the recovery image with CWM is flashed to the phone.
3 Flash the CyanogenMod Image With the CWM recovery image installed, we’re ready to flash CyanogenMod. First, we power the phone off again and remove the phone’s MicroSD card. We then copy the CyanogenMod image to the MicroSD card, reinsert it into the phone, and enter CWM by holding both the Volume Up and Volume Down buttons and simultaneously pressing and holding the power button until the Samsung logo flashes twice on screen. Once in CWM, we clear all of the phone’s caches and user data, and follow the onscreen menus to select and flash the CyanogenMod image. Upon restarting the phone, we are done.
We have explained how to install and use open-source firmware such as DD-WRT and Tomato on wireless broadband routers on multiple occasions, so we won’t go in-depth on the topic again here. Suffice it to say that using open-source firmware can enhance the performance and stability of a wireless network, enable a number of useful features, and give users a level of control that’s unheard of with the stock, bare-bones firmware included with many routers.
Open‑source firmware like DD-WRT gives users the ability to boost the transmit power on many popular wireless routers, which can help increase their range.
Simply installing open-source firmware isn’t enough to fully optimize your wireless connection, however. It’s a key component to a fast, stable wireless network, yes. But there are changes to the default settings within the firmware and commonsense environmental tweaks you can make that all incrementally enhance overall performance. Within the advanced wireless configuration menus in DD-WRT, for example, there is a setting labeled TX Power, which alters the transmit power of the router. Increase the value in the TX Power field and—you guessed it—the router will transmit a stronger, farther-reaching signal. There are some caveats to altering the TX Power value, though. First off, increasing the value too much can fry the router’s radio, so do some research to find the optimal value for your particular router. Although the default value (usually between 40mW and 71mW) may seem low when a maximum value of 251mW is available, cranking things up to the maximum is not advisable. Excessively high TX Power values can not only damage the router’s radio, but will also introduce unwanted electrical noise and instability. We’d recommend increasing the TX Power only in small increments for a modest boost in output. If you’ve got a device that sits on the fringe of your wireless network, a small bump in TX Power may be enough to rope it in with a stronger signal—provided its range isn’t limited by its own capabilities.
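DD-WRT expresses TX Power in milliwatts, but radio and antenna specs usually speak in dBm, a logarithmic unit (dBm = 10 x log10(mW)). A quick conversion shows that the gap between the defaults and the maximum is smaller than the raw milliwatt numbers suggest:

```python
import math

def mw_to_dbm(milliwatts):
    # Convert milliwatts to dBm (decibels relative to 1mW).
    return 10 * math.log10(milliwatts)

for mw in (40, 71, 251):
    print(f"{mw}mW = {mw_to_dbm(mw):.1f}dBm")
# 40mW is 16.0dBm, 71mW is 18.5dBm, and 251mW is 24.0dBm: the jump from
# default to maximum is only 6-8dB, but every 3dB doubles the radiated
# power (and the stress on the router's radio).
```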
Another firmware tweak to consider is disabling unneeded wireless modes. If all you’ve got are N-mode capable devices on the network, there’s no need to enable B and G modes, which can drag down peak performance. We also recommend scanning your environment with a utility like inSSIDer to see which wireless channel has the least amount of traffic, and configuring the router to use that channel.
Using a commonsense approach to your router’s positioning can also have a huge impact on wireless performance. It’s best to keep other wireless devices that may use similar frequency bands, like cordless phones, as far away from the router as possible to minimize the chance of interference. Microwave ovens are also notorious for interfering with Wi-Fi signals. Finally, if possible, position your router in as central a location as possible if you’ve got to service a large area. Most routers ship with omnidirectional antennas that transmit signals in all directions, so it’s best to minimize the distance between the router and any devices that are going to connect to it. The ideal way to do that is to (obviously) make the router the central point between all of your wireless devices.
Firmware tweaks and repositioning a wireless access point or router can only do so much to improve wireless performance and reception. To take wireless performance to the next level requires upgrading the router’s antennas.
Wi-Fi antennas come in all shapes and sizes, from corner and window mounts, to elongated pole mounts, indoor and outdoor models, and simple replacement rubber-duckies. All of these antennas, however, typically fall into one of two categories: directional or omnidirectional. The antennas included with virtually all commercially available routers and access points (those with external antennas, anyway) are omnidirectional, which is to say they transmit and receive wireless signals in all directions. Conversely, directional antennas focus their transmitted signal, and receive best, in a single direction. Wi-Fi antennas also carry some sort of dBi rating, which is a measure of the antenna’s gain. Higher-gain antennas typically offer longer range and better performance over a given distance than lower-gain antennas.
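To put rough numbers on what a higher dBi rating buys you, here’s a sketch under the idealized free-space path-loss model (real walls and floors will eat into this; the function and the example antenna figures are ours, not from any particular product):

```python
def range_multiplier(old_dbi, new_dbi):
    """Estimate the range increase from swapping antennas.
    Under the free-space path-loss model, received power falls with the
    square of distance, so every extra 6 dB of gain roughly doubles range."""
    return 10 ** ((new_dbi - old_dbi) / 20)

# Swapping a stock 2dBi rubber-ducky for an 8dBi high-gain antenna:
print(f"~{range_multiplier(2, 8):.1f}x the open-air range")
```

A 6 dBi step-up works out to roughly double the open-air range, which is why antenna swaps are such a cost-effective upgrade.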
Which antenna is best suited to a particular application will be determined by the environment and positioning of the router or AP. A centrally located router in a relatively open area that’s servicing systems all around it, for example, would be best served by a high-gain omnidirectional antenna. A router sitting in the corner of a building, which has to transmit a signal to the opposite corner, would benefit from the focused signal of a directional antenna pointed in the proper direction.
Whichever antenna option is best for your setup, one thing is certain: Replacing the basic antennas that come included with most routers with high-gain alternatives will result in an immediate and noticeable improvement in signal quality and range, which ultimately improves performance, as well.
The various Roku set-top devices are some slick pieces of hardware. The diminutive boxes are affordable and allow users to stream a plethora of digital content. As cool as many of the channels on the Roku are, though, not being able to easily stream content from a home network out of the box is somewhat disappointing. Many of us have amassed huge collections of digital media, after all; being able to stream that media from our PCs to a TV with a sub-$100 box the size of a Klondike bar would be handy, indeed.
Roksbox gives users the ability to stream their own personal content to the Roku player, provided the files are stored on a network-accessible PC and are of the correct format.
There is a way to stream personal files to a Roku, however. It’s a bit of a pain to configure and costs a few bucks, but in the end it’s worth it. By installing the Roksbox channel to your Roku (30 days free, $15 lifetime), strategically arranging your files with a specific directory structure, and setting up what’s essentially a web server, Roksbox will allow you to “tune in” to your own personal channel.
Before we explain how to use Roksbox with your Roku, we’re going to assume you’ve already got your device up and running and have an account registered on the Roku site. If so, head on over to the Roksbox site and click the link at the lower-left labeled “Add Roksbox Channel to Your Roku Player.” You’ll be taken to a new page and presented with a personalized URL that, when clicked, will take you to the Roku site and add the channel to your selections. Please note: It can take a few minutes (sometimes longer) for the channel to appear in your Roku’s interface after being added, so be patient. Once the Roksbox channel is listed in your selections, go to the channel store on the Roku player and select Roksbox.
With the Roksbox channel installed, the next step is to organize your media with a folder structure Roksbox can handle. By default, the channel looks for videos, music, and image files in your Videos, Music, and Photos directories. Create a folder with any name you like (we chose D:\media\) on the machine that’ll be serving the files. Inside it, create the Videos, Music, and Photos directories, and copy your media into the folders. Keep in mind, the Roku will only stream MP4 (H.264), MOV (H.264), and WMV/ASF (WMV9/VC-1) video files, AAC and MP3 audio, and JPG and PNG images. MKV files are supported only through the local USB port on the Roku.
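If your media is currently scattered across a drive, a short script can build the layout described above and sort files into it by extension. This is just a convenience sketch of our own (the extension-to-folder mapping follows the supported formats listed above); adjust the paths to taste:

```python
import shutil
from pathlib import Path

# Extensions the Roku can stream, mapped to the Roksbox
# directory each belongs in.
DEST_BY_EXT = {
    ".mp4": "Videos", ".mov": "Videos", ".wmv": "Videos", ".asf": "Videos",
    ".aac": "Music",  ".mp3": "Music",
    ".jpg": "Photos", ".png": "Photos",
}

def organize(source, media_root):
    """Copy supported files from `source` into the Videos/Music/Photos
    layout Roksbox expects under `media_root` (e.g., D:\\media)."""
    root = Path(media_root)
    for name in ("Videos", "Music", "Photos"):
        (root / name).mkdir(parents=True, exist_ok=True)
    for f in Path(source).iterdir():
        dest = DEST_BY_EXT.get(f.suffix.lower())
        if f.is_file() and dest:
            shutil.copy2(f, root / dest / f.name)
```

Files in unsupported formats are simply left behind, which doubles as a quick way to see what you’d need to convert before the Roku can stream it.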
The next step is to set up the web server. Windows 7 systems have IIS 7.5 built in; it just has to be enabled. If you prefer to use a different web server, Mongoose or Apache, for example, those will work, too. Our media was on a Windows machine, so we stuck with IIS. To enable IIS on Windows 7, click the Start button and type Windows Features into the search field. The Windows Features menu will open. In it, tick Internet Information Services, then tunnel down in the menu and also tick Basic Authentication. Click OK and IIS will be installed. Note: if you have a firewall enabled, add an exception for IIS so traffic can pass through on port 80.
All of the web server’s options can be configured through the IIS Manager utility, which is available under Administrative Tools after enabling the feature in Windows.
Next, head into Administrative Tools (go to Control Panel, then System and Security) and launch Internet Information Services (IIS) Manager. When the IIS menu opens, select Default Web Site from the Connections menu, and then click Advanced Settings in the Actions menu. In the window that opens, alter the Physical Path option so that it points to the directory where your media was copied and click OK. Then double-click Directory Browsing in the IIS window and enable it. Next, double-click MIME Types and add the MP4 extension, with the type “video/mpeg.”
The last three steps are to enable Basic Authentication, disable Anonymous Authentication, and create a user name and password (if you don’t already have one) for login. Select Default Web Site from the Connections menu again and double-click Authentication; you can enable Basic Authentication and disable Anonymous Authentication from there. Once done, close IIS Manager and create a new user account if necessary.
With the web server configured, head into the Roksbox settings on your Roku and point it to the IP address of your server. It’ll ask for login credentials, and if all is well, you’ll then be able to browse the files in the Videos, Music, and Photos directories and stream away. Should you run into any issues, there is an extensive step-by-step How To guide available on the Roksbox site.
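Not sure what your server’s LAN IP address is? Since Python is widely available, here’s a quick sketch of our own that discovers it (the trick opens a UDP socket toward an outside address, which makes the OS pick the right local interface without actually sending any packets):

```python
import socket

def lan_ip():
    """Discover this machine's LAN address by opening a UDP socket
    toward an outside host; no packets are actually sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works here
        return s.getsockname()[0]
    finally:
        s.close()

print(f"Point Roksbox at: http://{lan_ip()}/")
```

On Windows, running ipconfig from a command prompt and reading the IPv4 Address line gets you the same answer.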