In February, Blizzard opened the Starcraft II Battle.net for beta-testing. Hooray for Blizzard!
There are many good reasons for beta-testing. It’s not always about bug-stomping. In this case, it’s just as important to have the playability of the game tested under actual user conditions. This is the same kind of beta-testing Blizzard did with Warcraft III before it was released. Throughout the beta-test, they continued to tweak the balance of Warcraft III units. At the time of this writing, Blizzard has already made several changes in Starcraft II unit abilities, build times, and strength.
Starcraft II looks spectacular, of course. It has marvelous 3D graphics, terrific sound effects, great music, and the game-play is very exciting. (If I have a complaint, it’s not about the game, but about the tactics of some players — the ones who are so eager to annihilate the other guy right from the git-go that neither side gets a chance to experience some of the advanced tactical possibilities. The game is over before you’ve built your first Thor. I had this same complaint about some players in Warcraft III.)
But this isn’t a review. It can’t be, because the game isn’t officially released yet, and when it is officially released, there will be so many other people writing reviews that anything I might say here would be redundant. That disclaimer aside, I will say that so far Starcraft II is everything I hoped it would be. I’m sure that one of the benefits Blizzard enjoys from an open beta like this is that it gets the fan base so excited and enthusiastic that they’ll be lining up at the stores the day the boxes hit the shelves.
At the 2009 CES, Sony and Panasonic showed 3D HDTV as product concepts. Nvidia showed off its ability to display games in 3D, and several smaller companies demonstrated various 3D technologies, some with polarized glasses, some with shutter glasses. I liked Sony’s demonstrations the best because they used lightweight polarized glasses.
At the 2010 CES, Sony and Panasonic and other manufacturers demonstrated 3D television products that will ship later this year. Actually, any television with a refresh rate of 120 Hz or greater is ‘3D ready.’ You’ll still need synced shutter glasses and a 3D source, but the screen will be able to display both eye images at a fast enough rate to avoid jitter.
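If you want the back-of-the-envelope version of why 120 Hz is the magic number (a rough sketch, not anything off a spec sheet): a frame-sequential 3D set alternates left-eye and right-eye frames, so each eye only sees half the panel’s refresh rate.

```python
# Back-of-the-envelope: why 120 Hz is the practical floor for frame-sequential 3D.
# The panel alternates left-eye and right-eye frames, so each eye gets half the rate.
panel_hz = 120
per_eye_hz = panel_hz / 2   # 60 Hz per eye -- the same rate as ordinary 2D viewing
print(per_eye_hz)           # 60.0
```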
At the 2011 and probably 2012 Consumer Electronics Shows, we’ll start seeing second-generation and third-generation 3D products, by which time the technology will have matured, the prices will have dropped, and we will have settled into a standard for 3D HDTV.
But some industry pundits have already weighed in, suggesting that 3D is a fad, isn’t something that consumers really want, and doesn’t lend itself to home viewing—particularly because the ‘goofy glasses’ are a hindrance. Plus, 3D sets are expensive, and most consumers haven’t finished paying for their current HDTV sets, so why would they want to replace them this year?
The first stereoscopic movies were shown in theaters in 1922 and used red and blue (anaglyph) glasses. The first public demonstration of Polaroid 3D projection was at the 1939 World’s Fair in New York, in a promotional film for Chrysler.
In 1946, 90 million people a week went to the movies. Only a few years later, television had cut those attendance numbers almost in half. The studios were looking for ways to compete with this upstart industry. (Sound familiar?)
The first thing the studios did was to increase the number of Technicolor productions, because television was only black-and-white. They also began experimenting with various big-screen processes. Cinerama had a wraparound screen and needed three cameras and three projectors. VistaVision ran 35mm film horizontally through the camera for a larger, sharper image. CinemaScope used 35mm film projected through an anamorphic lens that stretched it sideways to fill a wide, curved screen.
But in 1952 an independent producer named Arch Oboler brought Bwana Devil to the theaters. It was a pretty dreadful movie, telling the story of two lions that killed 130 people during the construction of an African railroad, but the novelty of 3D drew large audiences to the theaters and the major studios were quick to leap aboard.
Here are some of the arguments against overclocking: “It voids the warranty. It stresses the system components beyond their specifications, sometimes to the point of premature death. It requires additional expenditures of power and cooling—and if you screw it up, you can fry your processor.”
And here is the biggest case for overclocking: “It makes my computer run faster.”
Both of those positions are valid. And most folks who have experience in overclocking are well aware of the ones and the zeroes in the equation. But neither of those assertions is compelling enough to end the argument one way or the other—because both of those positions fall short of the real issue.
Thirty-five years ago, Douglas Trumbull, the special-effects wizard who created marvelous spaceships for Stanley Kubrick’s 2001: A Space Odyssey and not-so-marvelous spaceships for Gene Roddenberry’s Star Trek: The Motionless Picture, also worked on a process called Showscan.
Showscan used 70mm film projected onto a screen that curved 150 degrees around the audience—exactly like the Cinerama screens of the ’50s and ’60s. The difference between Showscan and 70mm Cinerama was that Showscan was photographed and projected at 72 frames per second. The impact on viewers was profound. The image was so clear it looked three-dimensional. All film grain disappeared. If there was dirt on the film, it wasn’t on the screen long enough to register. All that was left was the image. Even better, fast-moving objects didn’t flicker, didn’t blur, didn’t shudder—they just moved smoothly. The inevitable roller-coaster demonstration was visceral.
Trumbull tested the Showscan process in front of several live audiences; he also had researchers from UCLA come in and measure the physical reactions of Showscan viewers.
What they found directly influenced the direction of home theater technology.
Before it was “personal computing,” it was “the computer revolution.” And before it was “the computer revolution,” it was “micro-computing.” And before that, it wasn’t anything except a few nerds tinkering with possibilities.
The first micro-computer was the MITS Altair 8800. Popular Electronics magazine put it on the front cover in January of 1975 and kick-started everything. By May, MITS had sold and shipped over 2500 kits. (That’s right, you had to be a power-user and build it yourself!) Shortly after that, IMSAI introduced the IMSAI 8080, and it became the fastest-selling micro-computer in the world.
Both machines were built on the S-100 bus and ran the 8-bit 8080 chip at a whopping 2 MHz. The IMSAI 8080 shipped with an astonishing 4K of RAM. (That number is still astonishing today. Can you actually do anything in 4K? The demo scene says yes.) It was a beautiful blue box with eight red and blue switches across the front. You set those switches up or down to indicate zero or one, and you entered each byte of your program and your data that way. You were the operating system.
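For readers who never had the pleasure, here’s a minimal sketch (in modern Python, obviously not period code) of what that front-panel ritual amounted to: eight switch positions form one byte, and you deposited it into memory one address at a time. The helper function and names here are mine, invented purely for illustration.

```python
# A minimal sketch (modern Python, not period code) of front-panel programming:
# eight toggle switches form one byte, which you deposit into memory by hand.

memory = [0] * 256                     # a tiny slice of the machine's address space

def deposit(address, switches):
    """switches: eight 0/1 values, most-significant bit first (hypothetical helper)."""
    byte = 0
    for bit in switches:
        byte = (byte << 1) | bit       # shift in one switch position at a time
    memory[address] = byte
    return byte

# Entering a single byte of a program -- and you did this for every byte.
# 0b01110110 is 0x76, the 8080's HLT (halt) instruction.
deposit(0, [0, 1, 1, 1, 0, 1, 1, 0])
print(hex(memory[0]))                  # 0x76
```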
The Turing Test says that if you can’t tell if you’re exchanging texts with a machine or a human being, then the machine has achieved cognitive ability—it’s thinking.
But based on that definition, and based on the evidence of the comment sections of various websites, then more than half the people posting online are not thinking. (And that may be a generous statistic. You can Google Sturgeon’s Law for a less optimistic assessment.) Too many people are just running tapes—canned responses. Automatic reflexes are simple mechanical operations. Press a button, run a program. There’s no thinking involved, just processing.
Thinking is reasoning ability. We see it in dogs, dolphins, chimpanzees, children, and even the occasional congressman—but that level of reasoning ability occurs at a primal level, it’s simple and direct. The higher functions of what we call rationality and sentience demonstrate themselves in profoundly different ways, recognizable but not easily definable.
Intelligence is generally able to recognize intelligence in action—and that may be one of the defining qualities of intelligence. Not every intelligent being can solve a Rubik’s Cube or prove Fermat’s Last Theorem, but we can still recognize the intelligence at work in those solutions. The next step, actually designing and creating intelligence, requires something else; call it meta-intelligence. We get to step back and think about thinking. We get to deconstruct thinking so we have a clear idea of what we want to build.
The term "artificial intelligence," however, is inaccurate.
Alan Turing should have been knighted. He should have been Sir Alan Turing. Instead he was prosecuted for being homosexual and committed suicide in despair. The British government conveniently forgot that Turing was the genius behind the Allies’ code-breaking efforts during WWII. The “Ultra Secret” is generally credited as the single most important advantage the Allied Forces had against the Axis powers, to the point that Eisenhower was sometimes reading Hitler’s mail even before Hitler.
Fifty-five years after Turing’s death, in response to an Internet campaign, the British government finally got around to acknowledging Alan Turing’s contributions and apologizing for its failure to honor him appropriately.
Sorry, guys, but an apology does not erase an egregious wrong.
Editor's Note: We're very pleased to welcome David Gerrold, an acclaimed and prolific science fiction writer, to Maximum PC as a regular columnist. David, best known for his numerous contributions to Star Trek and Star Trek: The Next Generation, will share his thoughts on topics including the influence of science fiction on technology, the development of tech trends, and notable technologists.
I try not to tell people I write science fiction. Too often, that turns into a conversation I don’t want to have: “Dude, it’s already ten past 2000. Where’s my flying car? Where’s my jetpack? Where’s my Lunar colony?”
This is "The Y2K Meme," the idea that the future was supposed to start in the year 2000 and we forgot to build it. And of course, because science fiction writers (allegedly) predicted all these glorious futures, it’s our responsibility to explain why it didn’t happen.
This meme began at least a century ago. The father of modern science fiction, Hugo Gernsback, made specific predictions about the future, everything from motorized roller skates to night baseball. Within a short time, many science fiction writers were functioning as futurists, telling tales of fabulous technologies to come.