Touch screens may never replace our clicky-clacky QWERTY keyboards—no, we’ll have to wait for brain-stem probes for that—but they are becoming more common. In fact, devices using this technology have been in use for more than 35 years and are becoming ubiquitous—kiosks, tablet PCs, desktop computers, and many handheld devices all now rely on human touch.
While the end result is the same—a display surface maps the coordinates of an input—touch screens rely on different phenomena to perform their functions, ranging from electrical current to infrared light to sound waves.
A resistive touch screen sandwiches several thin, transparent layers of material over an LCD or CRT. The bottom layer transmits a small electrical current along an X and Y path. Sensors monitor these voltage streams, waiting for an interruption. When the flexible top layer is pressed, the two layers connect to form a new circuit. Sensors measure the change in voltage, triangulating the position to X and Y coordinates.
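That readout can be sketched in a few lines of Python (a simplified model, not any particular controller's firmware): each axis behaves like a voltage divider, so the measured voltage's fraction of the reference voltage gives the position along that axis.

```python
def resistive_position(v_x, v_y, v_ref, width, height):
    """Map voltages read from a 4-wire resistive panel to pixel coordinates.

    When the flexible top layer presses into the bottom layer, each axis
    acts as a voltage divider: the voltage at the contact point is
    proportional to its distance along the driven axis.
    """
    x = (v_x / v_ref) * width
    y = (v_y / v_ref) * height
    return x, y

# A reading of half the 3.3 V reference lands mid-axis on an 800x480 panel.
print(resistive_position(1.65, 0.825, 3.3, 800, 480))  # (400.0, 120.0)
```

Real controllers alternate between driving the X and Y layers and typically average several samples to smooth out noise from the flexing top sheet.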
Resistive touch screens work with any kind of input, including a stylus or finger, and they’re usually very inexpensive to manufacture. They’re less durable than other types of touch screens, however, because the topmost layer experiences a great deal of wear from physical contact and constant flexing. Longevity isn’t a big problem for tablet PC and PDA deployments—two of the most common applications for resistive technology—but it can be for public kiosks, which are expected to endure more than 35 million impacts over their lifetimes.
Capacitive screens move the electrical layer to the top of the display. A minimal current is broadcast and measured from the corners of the monitor. When a person touches the screen, a small amount of the current is drawn away by the body’s natural capacitance. The sensors measure the relative loss of the current and a microcontroller triangulates the point where the finger made contact.
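A toy version of that corner-current math (a hypothetical simplification; real controllers also calibrate and linearize their measurements) might look like this:

```python
def capacitive_position(i_ul, i_ur, i_ll, i_lr, width, height):
    """Estimate a touch on a surface-capacitive panel from the currents
    drawn through its four corners (upper-left, upper-right, lower-left,
    lower-right).

    A finger pulls more current through the nearer corners, so the share
    of current flowing through the right-hand corners gives the normalized
    X position, and the bottom corners give the normalized Y.
    """
    total = i_ul + i_ur + i_ll + i_lr
    x = width * (i_ur + i_lr) / total
    y = height * (i_ll + i_lr) / total
    return x, y

# Equal currents from all four corners place the touch dead center.
print(capacitive_position(1.0, 1.0, 1.0, 1.0, 800, 480))  # (400.0, 240.0)
```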
Capacitive screens are more durable than resistive screens because their top layers are fabricated from rigid glass. They are typically easier to read because thin layers of material aren’t on top of the display surface. The need for a live fingertip, however, often makes them feel less accurate to the end user than a stylus-driven interface. Trackpads and handheld devices, such as Apple’s iPod Touch and iPhone, commonly use capacitive input.
Surface acoustic wave (SAW) screens use beams of ultrasonic waves to form a grid over the surface of a display. Sensors along the X and Y axes monitor the waves; when one is broken, the X and Y points are combined to identify the touch coordinate.
SAW screens, like their capacitive counterparts, are durable and provide a clear line of sight to the display image, but unlike capacitive screens they work with any kind of input, be it a fingertip, a fingernail, or a stylus. On the other hand, SAW screens are more susceptible to interference from dirt and other foreign objects that accumulate on the screen, registering surface contaminants as points of contact.
Infrared touch screens are similar to SAW screens in that they use a ring of sensors and receivers to form an X/Y grid over a display. But instead of sending electrical current or sound waves across this grid, infrared LEDs shoot invisible beams over the surface of the display. The microcontroller simply calculates which X and Y lines were broken to determine the point of input.
A frame around the display houses LEDs and photoreceptors on opposite sides. The LEDs emit light, which is detected by the photoreceptors. The display identifies X and Y coordinates when the user’s fingertip blocks one or more of the beams.
These screens work with a stylus, finger, or other pointer and give an unobstructed view of the display. They’re also durable because the point of input is registered just above the glass screen; only incidental contact is needed. Military applications often use infrared screens because of the product’s longevity.
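Both SAW and infrared screens reduce to the same grid logic: find which beams were interrupted on each axis. A minimal sketch, assuming the controller sees each beam as a simple blocked/clear flag:

```python
def grid_touch(x_beams, y_beams):
    """Locate a touch on a beam-grid screen (SAW or infrared style).

    x_beams and y_beams are lists of booleans: True means the beam reached
    its sensor, False means something blocked it. The touch coordinate on
    each axis is the center of the run of blocked beams; a wide fingertip
    typically interrupts more than one beam.
    """
    def blocked_center(beams):
        blocked = [i for i, clear in enumerate(beams) if not clear]
        return sum(blocked) / len(blocked) if blocked else None

    return blocked_center(x_beams), blocked_center(y_beams)

# A finger blocking X beams 3 and 4 and Y beam 1 registers at (3.5, 1.0).
print(grid_touch([True, True, True, False, False, True, True, True],
                 [True, False, True, True]))
```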
Infrared imaging touch screens are vastly different from touch screens that use traditional infrared input. IR imaging screens use two or more embedded cameras to visually monitor the screen’s surface. IR beams are transmitted away from the cameras, illuminating the outside layer of the display. When the beams are disrupted by a fingertip or a stylus, the cameras measure the angle of the object’s shadow and its distance from the camera to triangulate the disruption.
IR imaging allows a direct view of the display. And since the input is registered just above the glass, physical contact is not required to initiate action. HP’s TouchSmart IQ770, one of the first mass-market touch-screen computers designed for the home, features this technology. HP markets the TouchSmart as an in-home kiosk that families can use for quick tasks without necessarily having to rely on the mouse and keyboard for navigation.
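The triangulation step can be sketched with basic trigonometry (a simplified two-camera model, not HP’s actual implementation): each camera reports the angle at which it sees the shadow, and the two sight lines intersect at the touch point.

```python
import math

def ir_imaging_position(angle_left, angle_right, width):
    """Intersect the sight lines of two cameras mounted at the top corners
    of a screen of the given width.

    Angles are in radians, measured from the top edge down toward the
    display surface. Solving y = x*tan(a) = (width - x)*tan(b) gives the
    touch point.
    """
    ta, tb = math.tan(angle_left), math.tan(angle_right)
    x = width * tb / (ta + tb)
    y = x * ta
    return x, y

# Both cameras seeing the shadow at 45 degrees puts the touch at the
# center of a 100-unit-wide screen, 50 units down from the top edge.
print(ir_imaging_position(math.pi / 4, math.pi / 4, 100.0))
```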
While all the other touch-screen technologies rely on transmitting a wave or current, acoustic pulse screens just listen, literally. Two or more receivers are mounted at the edges of the screen to monitor contact. The tap of a finger, stylus, or other pointer makes a small sound vibration, which the display then triangulates. Based on the relative volume of the sound and other factors, the display can quickly determine where on the surface the input occurred. These types of touch screens are particularly useful in public kiosks, not because they’re impervious to surface scratches, but because the scratches don’t interfere with the screen’s ability to detect contact.
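As a deliberately crude illustration (the displays weigh relative volume along with other factors; this toy model assumes only that loudness falls off inversely with distance), two edge-mounted receivers can locate a tap along one axis:

```python
def acoustic_position_1d(amp_left, amp_right, length):
    """Locate a tap along one axis from two receivers at opposite edges,
    assuming the tap's amplitude falls off inversely with distance.

    A louder reading at the left receiver means the tap was nearer to it,
    so the left-edge distance is length * amp_right / (amp_left + amp_right).
    """
    return length * amp_right / (amp_left + amp_right)

# Equal loudness at both receivers puts the tap at the midpoint.
print(acoustic_position_1d(1.0, 1.0, 100.0))  # 50.0
```

A real two-dimensional display would repeat this comparison across several receivers, which is why surface scratches, unlike on resistive screens, do not degrade the reading.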
Capacitive and resistive touch screens will likely continue to be the most commonly implemented technologies because of their low cost. They can even be combined into a single display, producing ideal fingertip or stylus input depending on the situation. However, we expect optical tracking to become more common because of its accuracy and flexibility.
Microsoft’s newly released Surface PC hides IR cameras beneath a tabletop screen. These cameras work similarly to IR imaging systems, but they monitor display interactions from below instead of from the side. This perspective allows the computer to visually identify input, offering a different interface depending on what is placed on the screen.
Surface and other new displays are also embracing multitouch input. (The iPhone brought multitouch to the masses, although the technology has existed for 25 years.) Since the Surface PC can identify multiple fingers, it can let more than one user operate it at a time. Or a single user can use multi-finger gestures to resize and manipulate items on the screen.
Nearly any of the touch-screen technologies can support multitouch input; however, some need additional sensors to help triangulate simultaneous inputs. The iPhone and the iPod Touch, for example, use a capacitive touch screen that reports discrete coordinate pairs rather than per-axis activity. This allows two touches along the same axis—which would confuse certain capacitance touch-screen designs—to be registered as independent points of contact.
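The weakness of axis-based sensing can be shown in a few lines (a hypothetical sketch, not Apple’s controller logic): with two simultaneous touches, per-axis readings admit four possible points, and the sensor cannot tell the real pair from the two “ghost” points.

```python
def axis_readout(touches):
    """What an axis-based sensor reports for a set of touch points: which
    X positions and which Y positions saw activity, but not their pairing.
    """
    xs = sorted({x for x, _ in touches})
    ys = sorted({y for _, y in touches})
    return xs, ys

def candidate_points(xs, ys):
    """All pairings consistent with an axis-based readout. With two
    touches, this includes two ghost points the sensor cannot rule out.
    """
    return [(x, y) for x in xs for y in ys]

real_touches = [(2, 3), (7, 8)]
xs, ys = axis_readout(real_touches)
print(candidate_points(xs, ys))  # [(2, 3), (2, 8), (7, 3), (7, 8)]
```

A coordinate-based sensor sidesteps the ambiguity by reporting each contact as a complete (x, y) pair in the first place.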
While the technologies may differ, we look forward to touch screens filling up walls and tables in our homes and offices. At that point, simple, direct interaction will beat traditional input methods. Who wants to carry a mouse around the house when a personal touch will do?