When man first booted the PC, he saw the BIOS screen: a jumble of monochromatic numbers that made about as much sense as the binary language of load lifters. Sadly, not much about the BIOS has changed since the DeLorean and skinny ties were cool. Decades later, in our modern, visual-based world, we’re still greeted with a screen full of text from machines 1,000 times faster than those that were around when the ol’ BIOS was born.
Most PC lightweights simply ignore the BIOS and wait for their OSes to take over. Power users, however, know that the BIOS can be a friendly and rewarding place to go spelunking.
So just what the hell is the BIOS? Short for Basic Input Output System, the BIOS is a tiny bit of software embedded in your motherboard that gets executed when your PC is turned on. The BIOS is responsible for chores such as sizing up the amount of available RAM, detecting the hard drives, and setting the CPU speed. Once the system house-cleaning is done, the BIOS boots the OS from the hard drive and hands over control.
Even though there are only two BIOS makers for consumer desktops today, AMI and Award/Phoenix, a multitude of BIOS variants exists. In fact, a Gigabyte board using an Award BIOS can bear little resemblance to an Asus board using an Award BIOS.
In motherboards designed for enthusiasts, board makers typically unmask as many controls as possible. Unfortunately, the dizzying array of options includes both safe and unsafe tweaks. While some tweaks will just leave you with a system that refuses to boot, others can do long-term harm.
So if you feel the least bit uneasy about even changing the boot order of your rig’s drives, you may not want to muck around too much in the BIOS. If, however, you’re comfortable with the prospect of a little trial and error, it’s time you dive in and discover the many secrets your BIOS holds.
How do you get into your BIOS? Reboot the system and then hit DEL, F1, or F2 within a few seconds of the machine POSTing. If your machine displays a splash screen that hides the POST text, try hitting Escape, which should reveal the ugly DOS-looking screen underneath. Only Intel-branded boards still require jumpers to be thrown to access all of the BIOS features. Power down, look for the BIOS Setup Configuration Jumper, set it to 2-3, and power up.
The days of just selecting your RAM speed are gone. A modern BIOS exposes enough RAM controls to give even the most seasoned hobbyist a headache. For the die-hard enthusiast though, those knobs and switches also mean something good: control.
Poke around the BIOS of a budget board or an OEM machine and you’ll find it as easy to understand as “The Pet Goat.” Heck, even an enthusiast board from three years ago could be understood by most advanced users, as the memory options were as simple as DDR333 or DDR400. Today, we’re not even sure that the engineers who write BIOSes fully understand all of the options available. Take, for example, DQS Drive Strength or Process On-die Term B. Huh? Both actually relate to the ability to tweak and tune your RAM to higher frequencies, but for the most part, you can ignore them unless you really want to spend an entire afternoon setting, crashing, and resetting your machine.
Fortunately, the fundamentals are still as valid today as they were a couple of years ago: Column Address Strobe Latency (tCL), Row Address Strobe to Column Address Strobe Delay (tRCD), Row Address Strobe Precharge (tRP), Active-to-Precharge Delay (tRAS), and Command Rate, or Command Per Clock (CMD). In the BIOS, you’re able to tweak the timing for each of these settings to affect RAM performance.
If you think of RAM as a collection of books in a public library, each timing setting relates to an element of the librarian fulfilling your request for a particular tome. The timing is described in clock cycles, so a lower number equals a faster time.
The tRCD setting, for example, describes how much time the librarian has to get to a certain row on a shelving column. Set it too low, and she can’t get to the row where your desired book resides.
Say she reaches the row; tRP is how much time the librarian has to get from the row she was at back to the bottom of the ladder.
tCL is how much time she has to move between the different shelves of books on that row. Setting it too low would be like asking her to push a 30-foot rolling ladder 100 yards in 2 seconds. tRAS is basically how much time the overall operation takes: climb the ladder, get the book, and get off the ladder.
CMD describes the amount of time between one request and the next.
There are two approaches to setting these values: The first is to match them with the timings on your RAM (assuming your RAM provides those settings—commodity RAM doesn’t always list specs). If you paid extra for those fancy high-performance modules, you’re getting more than just a shiny aluminum heat spreader, you’re also getting RAM that’s been tested and binned to run at optimal speeds. If you peer at the label of most enthusiast RAM, you’ll see timing settings of 5-5-5-15-2T.
Translated for your BIOS, that means a tCL of 5, a tRCD of 5, a tRP of 5, a tRAS of 15, and a CMD of 2T. The other method is to let the chipset determine the settings automatically. For example, you could enable SLI memory mode on nForce boards, which would give you optimum settings if the modules support Nvidia’s Enhanced Performance Profiles (EPP). Intel has a similar feature called XMP.
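To make the translation concrete, here’s a minimal sketch of how a module’s printed timing string maps to the individual BIOS fields. The helper name `parse_timings` is our own invention for illustration, not anything your BIOS or a real tool exposes:

```python
def parse_timings(label: str) -> dict:
    """Split a printed timing label (e.g. "5-5-5-15-2T") into
    the individual BIOS settings described above."""
    parts = label.split("-")
    return {
        "tCL": int(parts[0]),   # CAS latency
        "tRCD": int(parts[1]),  # RAS-to-CAS delay
        "tRP": int(parts[2]),   # RAS precharge
        "tRAS": int(parts[3]),  # active-to-precharge delay
        "CMD": parts[4],        # command rate, e.g. "2T"
    }

print(parse_timings("5-5-5-15-2T"))
```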
There’s more to getting your high-performance RAM to run at its rated speed though. The RAM manufacturer specs for timing require the RAM to run at its rated clock speed (see below) and at a certain voltage (see page 62).
To make sure your RAM is set to the correct clock speed in the BIOS, you’ll need to first know your RAM’s overall bandwidth rating. If it’s expressed as PC3200 or PC6400, you can find out your RAM’s clock speed by dividing by eight. So 3200 becomes 400, or 400MHz, and 6400 becomes 800, or 800MHz. Most memory vendors will actually list the module’s overall bandwidth—say, PC8500—along with the rated clock speed—1066MHz, in this case. When it comes to manually setting your RAM’s clock speed in the BIOS, you’ll find the process differs among chipset vendors. On Intel chipsets, where the memory controller is still in the chipset and RAM is tied to the front-side bus, it gets a bit confusing: If you overclock your CPU’s front-side bus, your RAM’s clock speed will be automatically overclocked along with it. This could cause problems if the RAM’s speed is set beyond its rating. (See the North Bridge Strap section on this page to learn how to compensate for this issue.)
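The divide-by-eight rule comes from the module’s 8-byte-wide bus: the PC rating is peak bandwidth in MB/s, so dividing by 8 bytes per transfer yields the effective clock in MHz. A quick sketch (note that marketing numbers are rounded, so PC8500 works out to 1062 by this math but is sold as 1066MHz):

```python
def rated_clock_mhz(pc_rating: int) -> int:
    """Convert a PC bandwidth rating (MB/s) to effective clock
    speed (MHz) by dividing by the 8-byte bus width."""
    return pc_rating // 8

print(rated_clock_mhz(3200))  # 400
print(rated_clock_mhz(6400))  # 800
```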
With Nvidia’s nForce series chipsets, you can actually unlink the FSB from the RAM. This lets you independently set the clock speed for the front-side bus to, say, 1066MHz, and the RAM to 800MHz. The nForce also lets you run the two in linked mode using traditional ratios of 1:1, 5:4, 3:2, and sync. These set the RAM speed based on a ratio related to the speed of the front-side bus. If you’re running a 1066MHz FSB CPU and a 1:1 ratio, your RAM would run at 1066. At 5:4, the RAM is 853, and at 3:2 it’s 711. Sync would set the RAM at 533. Various vendors pitch linked mode as the best way to set RAM, but we’ve come to settle on getting the highest reliable front-side bus speed with the RAM speed that works best for you. Remember:
Just because your RAM is rated to run at, say, 1100MHz, doesn’t guarantee best results at that speed. Since the interaction between memory, chipset, and CPU will greatly depend on what you’re doing, there is no one-size-fits-all answer. Get out the game or application you use the most and tweak the memory settings until you find the optimal solution.
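The linked-mode ratios above are simple arithmetic; this sketch reproduces the numbers from the 1066MHz FSB example (the function is our own shorthand, not an nForce utility):

```python
def ram_speed(fsb_mhz: int, fsb_part: int, ram_part: int) -> int:
    """Effective RAM speed for a given FSB:RAM ratio in
    nForce linked mode, rounded to the nearest MHz."""
    return round(fsb_mhz * ram_part / fsb_part)

fsb = 1066
print(ram_speed(fsb, 1, 1))  # 1066
print(ram_speed(fsb, 5, 4))  # 853
print(ram_speed(fsb, 3, 2))  # 711
print(fsb // 2)              # sync mode: 533
```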
Fairly new to Intel-chipset boards is a feature known generically as the north bridge strap—Asus calls it the FSB Strap to Northbridge and Gigabyte calls it the System Memory Multiplier—and it can throw us old-timers for a loop. The north bridge is actually its own little processor, which, on Intel chipsets, is tied, or “strapped,” to the front-side bus and memory. It’s possible to change the speed of the strap—both Asus and Gigabyte, for example, let you manually select strap speeds from 200MHz to 400MHz.
There are two practical uses for this. First, by manually setting the speed of the north bridge strap, you can change the memory clock speeds available on the board. As mentioned above, simply increasing the front-side bus speed will automatically increase the speed of the memory—perhaps far beyond what your module is rated for. By notching the strap down, you can get your RAM operating within spec while leaving the FSB at its overclocked state. Why not just let you pick the RAM speed you want and be done with it? The theory is that the straps are already preconfigured to offer the best performance ratios, which are preferable to those you set on your own.
The second purpose of the strap: The internal clock in the north bridge will gradually tick up as you increase the front-side bus of your system. It’s somewhat similar to the gear ratios in a car. As you rev up the front-side-bus speed, the rpms of the north bridge can get out of spec and cause a crash. The strap will adjust the speed of the north bridge clock independent of the FSB. The general rule of thumb for overclockers is to use the lowest strap available that runs your RAM at the speed you need. This should enable higher front-side bus overclocks.
The upshot of this is to run in auto mode if you’re not overclocking and leave it to the board engineers. If, however, you are overclocking and seemingly hitting a front-side bus wall that no amount of voltage will address, try lowering the north bridge strap to see if you can push the FSB even higher.
If you’re an AMD user and you’re confused by all this north bridge strap stuff, you can just ignore it. Since Phenom CPUs feature the memory controller directly in the CPU core, there is no memory controller strap to futz with. What is confusing is the ganged or unganged mode available in Phenom boards. Phenom CPUs feature two separate memory controllers that can be run ganged or unganged. Generally, you’ll want to run as unganged to let the controllers operate independently for best performance.
Some motherboards have begun offering the ability to tweak the “clock skew” for RAM. In a nutshell, clock skew is the variation in speed of a module’s individual signal paths to the memory controller. Skew is the result of the signal distortion caused by the traces in the motherboard, the cleanliness of the power going to the board, and the RAM that’s in use.
Tweaking the skew settings can help increase stability when you’re pushing the chipset and RAM to its limits by overclocking. It’s a game of trial and error with skew settings, but if you’ve got the time and energy, it could help you achieve the few extra megahertz you were hoping to get out of your system—just be ready to roll up your sleeves and run the POST, crash, reset, POST routine. If you’re not overclocking, however, you can just ignore these timings.
When the BIOS is finished prepping the hardware, it doesn’t necessarily have to hand control over to the OS. Instead, many companies are now inserting a pre-OS, or preboot, environment on their boards that the PC can boot to before the OS.
These environments are stored on small bits of flash RAM embedded on the motherboard and can contain a basic Internet browser, Skype client, and even the ability to access your Outlook email and contacts. Although referred to as a pre-OS, the majority of these environments are embedded Linux.
The feature has long been found in notebooks, but it’s now migrating to desktop motherboards. Currently, Asus is the primary adopter of the pre-OS and has it in many of its motherboards. In our experience, it’s a novelty that can occasionally be useful—say, for example, you need information from the Internet faster than you can wait for the OS to load. With Asus’s ExpressGate pre-OS you can be in a browser in one minute instead of five. Granted, that’s a rare need, but we can see a pre-OS browser being useful for, say, downloading utilities, drivers, or fixes to a broken or infected OS on the hard disk—though currently, none of the implementations we’ve seen allows you to save files to your machine.
There's more to prepping a rig for a new CPU than just setting the FSB
If you’re used to poking around the BIOS, you don’t need to be told that the CPU’s overall clock speed is determined by multiplying the CPU’s clock multiplier by the front-side bus. In other words, the overall clock speed of a CPU with an 8x multiplier and a front-side bus of 400 is 3200MHz.
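The multiplier math in one line, matching the example above:

```python
def cpu_clock_mhz(multiplier: int, fsb_mhz: int) -> int:
    """Overall CPU clock = clock multiplier x front-side bus speed."""
    return multiplier * fsb_mhz

print(cpu_clock_mhz(8, 400))  # 3200
```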
What you might not know is the purpose of some of the more obscure CPU-related BIOS settings. Both C1E and EIST relate to power-saving techniques employed by Intel CPUs. EIST, or Enhanced Intel SpeedStep Technology, is an offshoot of the notebook SpeedStep feature that lowers the CPU speed when it’s not under heavy use. C1E is an enhanced halt state that cuts the clock multiplier in the CPU to a preset value when the OS tells the chip that it has no work for it. Each has pros and cons. EIST is known for greater granularity, ramping up and down depending on load, but it does require driver support in the OS to manage it. Critics say EIST can actually reduce performance since it’s designed to operate the CPU at lower speeds whenever it’s not running at 100 percent capacity. The C1E state is issued by the OS when it’s idle, so C1E doesn’t require quite as much management. But some overclockers prefer to disable C1E since it can interfere with overclocks. We’ve seen older boards feature settings for both, but in our experience, newer chipsets from Intel contain settings only for C1E. Flipping off the features will force the CPU to always run at its maximum clock speed. Phenoms have similar features with Cool’n’Quiet (akin to EIST) and now C1E support. While you’re not supposed to, we’ve run with both settings on without issues, but your mileage may vary.
New CPUs include thermal sensors that slow down the CPU when it overheats. If you’d rather have your machine bluescreen instead of slow down (perhaps for stress testing), you can switch CPU Thermal Control in the BIOS to Off. nForce chipsets actually let you select between lowering the CPU clock speed, or cutting the multiplier and voltage, or both. Since we’d rather lose performance than outright crash, we normally set the BIOS so the clock speeds drop.
One CPU setting that can probably be turned off by most folks is VT, aka Vanderpool or Virtualization support. The setting enables virtualization hardware support in the CPU for, well, virtualization. It basically turns on the hardware “acceleration” capabilities when using such applications as VMware or Virtual PC. If you don’t run virtualization, it’s completely unnecessary. If you do, well, don’t expect miracles since hardware acceleration of virtualization is still in its early phases.
The Execute Disable option is a switch in the BIOS that prevents many buffer overflow attacks, whereby malicious programs are able to circumvent security by putting viral code in RAM and executing it by intentionally overflowing the buffer. AMD created the feature and calls it NX. Intel’s clone of it is called XD. Both do the same thing. There’s some disagreement whether it hurts or helps though. Some people have reported problems with overclocking when Execute Disable is on, while others claim it’s not an issue.
Our take is to leave it on unless you’re specifically having problems with it—on the other hand, we’re skeptical whether the feature makes a lick of difference. If it did, wouldn’t it make Windows XP SP2 machines totally secure? Right.
To verify that hardware data execution protection is enabled, go into Windows, hit Start, then Run, and type CMD. Enter the command wmic OS Get DataExecutionPrevention_Available. The response should be “true.” Or simply download Gibson Research’s SecurAble, which will scan your machine to verify protection.
Just because it’s in the BIOS doesn’t mean you should touch it. Such is the case with PCI Express overclocking. Notoriously finicky and known to cause crashing, overclocking the PCI Express bus in the hopes of getting more GPU performance rarely ends well. In many cases, overclocking the PCI-E bus by even 1MHz beyond its stock 100MHz can cause instability.
Want a really good example? Nvidia made much hay of the Linkboost feature in its 590 SLI and 680i SLI chipsets. Linkboost would automatically overclock the PCI Express slot by up to 25 percent when paired with GeForce cards.
We never could understand the need for Linkboost, as PCI Express bandwidth was so great to begin with. Nvidia must agree now too. The company has removed the feature completely from the newer BIOSes for those motherboards.
You might also be tempted to disable USB legacy support since the feature lets USB keyboards and mice work in DOS mode, and, well, who the hell runs DOS anymore? You do—if you boot into safe mode. With USB legacy support disabled in safe mode, your USB input devices would be rendered useless.
They say no pain, no gain. But it’s really no voltage tweaks, no high overclocks. While the risks are real, overvolting can pay great rewards
A reader recently asked us whether heat or voltage was more dangerous to a CPU. Hands down, we’d say voltage is far more dangerous. All modern CPUs have a built-in limiter that throttles the CPU if it overheats. The same is not true when a chip receives more voltage than it was designed for. Clearly, this is the most dangerous part of mucking around in the BIOS. If you’re faint of heart and don’t like to break things, don’t mess with voltage tweaks. However, if you’re looking for that extra bit of performance, voltage tweaks are often the only way to get there.
Modern motherboards will let you turn all kinds of voltage knobs, but the basics are CPU core voltage, RAM, and chipset.
If you read our sections on memory timing and speed, there’s one very important fact you need to know: You’ll likely need to overvolt your high-performance RAM modules to hit their rated speeds. DDR400 officially tops out at 400MHz, and DDR2 tops out at 800MHz. Anything higher is technically beyond JEDEC specification and invariably requires overvoltage to hit. In fact, you’ll notice that much of the high-performance RAM today will include recommended voltage settings needed to hit the clock speed and timings it boasts.
DDR’s spec’d voltage is 2.5 volts. DDR2’s is 1.8 volts, and DDR3’s is 1.5 volts. To give you an idea of how much additional voltage you need to overclock RAM, consider this: To get a typical DDR2 DIMM to go from DDR2/800 to DDR2/1066, you have to push the voltage to 2.20 volts. To get a DDR3 module to reach all the way to DDR3/1800, you have to push 2.0 volts. If you ask us, that’s an awful lot of voltage, and your modules probably aren’t going to last several years at those levels. On the other hand, what enthusiast is going to run the same RAM for five years anyway?
Which brings us to the age-old question: “How much voltage do I run?” For RAM, we recommend that you follow the manufacturer’s settings, as that will be the best indicator of the module’s overclocked speed and voltage needs. For CPUs, it’s chip dependent. One way to judge how far you can push your chip’s voltage is to cruise forums at MaximumPC.com, Anandtech.com, HardOCP.com, or any of the numerous forum boards out there to see what people are running for your particular CPU. One new development we like is the danger gradations in some vendors’ newer BIOSes.
Older BIOSes simply let you select how much additional voltage to add to the CPU without any regard for the risk. Some newer BIOSes will actually indicate by color how hazardous your voltage increase is. Gray is mostly safe while red indicates nuke potential. Since we figure the board engineers are basing their threat levels on lookup tables based on the CPUs themselves, we feel pretty confident cranking up CPU voltage to just below the red zone.
BIOSes today also let you increase voltage to the north bridge and south bridge separately, and in most nForce boards, even the HyperTransport link between the north bridge and south bridge can be overvolted. Do you really need to do this? We’ve found that, yes, you do need to goose the north bridge voltage on occasion to get stable upgrades, but like CPU and RAM overvolting, it’s quite risky and can damage the board when done without caution. Take our previous advice: See what works for others before jumping in with both feet.
Before you POST your new system to install the OS, you should disable unneeded ports and make your decision to run either AHCI or IDE
When we build a new system, one of the first things we do is flip through the BIOS, turning off things we know we won’t ever use, such as the serial port and parallel port. If your system doesn’t include a floppy drive (some still do), we also flip off the floppy controller in the BIOS. Turning these features off saves some system resources, but it mostly just makes us feel good.
If you dig into your BIOS you’ll also see a setting that lets you configure your SATA ports as IDE, RAID, or AHCI. Default should be IDE and most people understand that setting RAID turns on the RAID features of the chipset, but just what is AHCI? It’s the Intel specification dubbed Advanced Host Controller Interface that enables such fancy features as native command queuing and hot-plugging of SATA devices. If you leave AHCI off, your drives will run in an emulated IDE mode.
The rub is that AHCI is not supported in Windows XP natively. You will have to use a floppy drive and F6 drivers or create a slipstreamed version of XP with AHCI drivers just to install the OS. If you already have Windows XP installed, flipping on AHCI will prevent the OS from loading. It’s also not clear what level of AHCI support Vista has, but if you install with AHCI on, you don’t need to install drivers. If you install Vista in IDE mode, however, and then turn on AHCI mode in the BIOS, the OS bluescreens.
Do NCQ and hot-plugging make AHCI worthwhile? For the most part, no. NCQ can actually hurt performance in some situations. Still, there have been online reports of chipsets performing quite poorly unless AHCI is enabled. AHCI is supported only by Intel and ATI at this point and not by Nvidia.
The BIOS is older than many of the people who actually use a PC, so why in this age of 3D-accelerated 64-bit operating systems are we still using a line-based interface and 16-bit real-mode BIOS? That’s a conundrum the industry is hoping to fix with the Unified Extensible Firmware Interface, which may well replace many of the things the BIOS does today. An obvious advantage of UEFI is that it supports a GUI and mouse controls. UEFI would also be processor agnostic, use higher-level languages such as C++ instead of assembly language, and pretty much make booting your PC more like, well, booting a Mac.
It won’t happen overnight, though. Few desktop motherboard vendors beyond MSI have hopped onto UEFI and only the 64-bit flavor of Windows Vista SP1 supports it. Even if UEFI takes off, the BIOS will not totally go away. It’ll just get a serious demotion to doing very basic power on self-tests before handing over control to UEFI. The difference is that you may access those familiar controls using a UEFI GUI interface, which will also roll in pre-OS applications as well.