After 119 monthly issues and roughly nine and a half years (3,474 days, according to Wolfram Alpha), this is my last issue as Editor-in-Chief of Maximum PC. I’d like to pretend it’s been grueling work—from the crazy costumes to our intern-torturing escapades to the great smoke alarm incident of 2000—but, I can assure you, it’s been a blast.
Most importantly, I’ve enjoyed working for you guys over the last decade—a decade that’s been chock-full of amazing technological triumphs, as we accelerate ever faster toward the Singularity. To give a little perspective to that decade, here are the four achievements (presented in no particular order) that I think have made the biggest impact on the world during the time I’ve been at Maximum PC.
What a year for Google! Though I suppose one could really say that almost any year. Not to sound like a wide-eyed admirer or uninformed fanboy, but it seems as if Google always has something grand up its sleeve. But instead of waxing nostalgic about all of "The Goog's" fancy Web-based services or search refinements or what-have-you, I think it's important to note just how dramatically Google has made its mark on the open-source world in 2009.
Yes, I'm talking about Chrome. Or Android. Or Chrome-Android. You know, those two independent-but-not-really operating systems that are different yet similar enough to warrant Google splitting them, with a wink and a nod that they'll likely be combined at some grand point in the future.
I'll simplify. Android is the mobile version of Google's open-source OS. Chrome is the desktop/laptop/netbook/who-knows version. Sort of. Android is in the process of spilling over to tablets and has already made the jump to netbooks. Chrome is currently under wraps at Google, save for the open-source variant, Chromium OS, which is free for the taking, building, and installing.
Confused? I wouldn't be surprised. For all the intelligence packed into the dark recesses of Google's worldwide campuses, the company doesn't have a walk-in-the-park path to victory in the mobile, desktop, or laptop markets with its bevy of open-source operating systems. I've identified five points that could turn Google's fortunes around--and you'll find them after the jump!
I don't often connect to Apple's iPhone App Store. It's not that I don't like perusing new and interesting applications or games. Rather, it's because Apple has made the process of purchasing new applications so cumbersome that it's simply not worth my time to scan through the listings for new things to try out.
In fact, you could make a solid argument that there are no "real" listings of applications and games in the App Store. As to why that's the case, one need look no further than Apple's stranglehold on its own platform--were there a clarion call for a more open experience in application management, it would require Steve Jobs to sit on the receiving end of one of those giant horns from the Ricola commercials.
We've been down this road before, however. I only readdress the issue because of all the unrestrained hype surrounding Intel's development of its own App Store for netbooks. Given the success of the Apple model--three parts promotion, one part consumer restraint--I can see no reason why Intel wouldn't follow suit.
The Turing Test says that if you can’t tell if you’re exchanging texts with a machine or a human being, then the machine has achieved cognitive ability—it’s thinking.
But by that definition, and based on the evidence of the comment sections of various websites, more than half the people posting online are not thinking. (And that may be a generous estimate. You can Google Sturgeon’s Law for a less optimistic assessment.) Too many people are just running tapes—canned responses. Automatic reflexes are simple mechanical operations. Press a button, run a program. There’s no thinking involved, just processing.
Thinking is reasoning ability. We see it in dogs, dolphins, chimpanzees, children, and even the occasional congressman—but that kind of reasoning occurs at a primal level; it’s simple and direct. The higher functions of what we call rationality and sentience demonstrate themselves in profoundly different ways, recognizable but not easily definable.
Intelligence is generally able to recognize intelligence in action—and that may be one of the defining qualities of intelligence. Not every intelligent being can solve a Rubik’s Cube or prove Fermat’s last theorem, but we can still recognize the intelligence at work in those solutions. The next step, actually designing and creating intelligence, requires something else; call it meta-intelligence. We get to step back and think about thinking. We get to deconstruct thinking so we have a clear idea of what we want to build.
The term "artificial intelligence," however, is inaccurate.
The recession is getting so bad that stock market refugees are snapping up Treasury bills at 0.2 percent interest, and car dealers have tried everything but adding immortality to their option packages. So you would think that a hot-selling product would be universally welcomed.
Netbook computers are a rare bright spot in a dimming economy. They’re selling faster than copies of Foreclosure for Dummies. The Asus Eee PC opened the door. Now there are too many to count.
However, critics say netbooks might be a bad thing. Their reasoning is that most netbooks use Intel’s Atom processor, which costs less and has lower profit margins than Intel’s other mobile processors. Atom’s popularity, they say, might actually hurt Intel and drag down profits for system vendors and their suppliers.
Ignoring the absolutely, hilariously awful second movie, the universe of Vin Diesel vehicle Richard B. Riddick is undeniably fascinating. Each of its good entries dishes out only as much juicy info as Riddick and a small cast of supporting characters see fit, creating a potentially infinite playground for Diesel’s be-goggled antihero to bully around. And, as with any well-constructed sci-fi setting, no trip to Riddick’s take on the final frontier is complete without a liberal helping of the four W’s. What’s the deal with this planet? Why is Riddick performing fistic genocide on half of its population? Who made these totally rad mechs? And where can I get one?
The answer to all of these questions is simple in The Chronicles of Riddick: Assault on Dark Athena -- explore.
Or at least, that’s the logical solution, and in a universe where even a quick moment of hesitation is liable to end with someone on the receiving end of a knife to the eye socket, it’s probably best to avoid asking too many questions. So, during my still in-progress playthrough, I’ve been plumbing the grimy depths of Alcatraz’s out-of-this-world cousin, Butcher Bay. Unfortunately, as of now, the only reward I’ve received for all my exploration is a pack of smokes. And by “a pack,” I mean somewhere in the upper double digits. Suffice it to say, it’s a good thing Riddick doesn’t use the same cigarette storage methods as Solid Snake.
But for me, this literal smoke stack still presents a problem. Sure, I’m being rewarded for my constant exploration, and yeah, the Special Surprises inside each carton – ranging from concept art to behind-the-scenes tech demos – are pretty neat, but after a while, everything just becomes so predictable. Under those crates? A cigarette carton. On that ledge? A cigarette carton. Behind your ear? Well, you get the idea. And really, isn’t the main appeal of exploration – and, to an extent, gaming in general – discovery and subsequent mastery of the unknown? Why take a hike off the beaten path when I already know what lies just around the corner – especially when, in all likelihood, said main path will provide me with far more varied rewards for my trouble?
Nvidia stands at a crossroads, with two closed, proprietary APIs that have mainstream potential: the general-purpose computing CUDA API, and the PhysX physics-acceleration API, which sits on top of CUDA. These are both promising technologies, but only owners of Nvidia hardware can harness their power. Meanwhile, there are two emerging open standards that mirror what Nvidia is doing with its proprietary development. One is OpenCL 1.0, and the other is a general-purpose GPU computing API, which Microsoft will include in DirectX 11. Relatively few consumer applications use CUDA, PhysX, or OpenCL right now, but the possible applications for the tech are endless—grossly simplified, these APIs let graphics chips perform CPU-like functions.
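To make that "grossly simplified" description a little more concrete, here's a minimal, hypothetical CUDA sketch--every name and number in it is illustrative, not drawn from any application mentioned in this column--that uses the graphics chip to do a classic CPU-style job: adding two large arrays of numbers in parallel. OpenCL and the DirectX 11 compute API express the same basic idea with different syntax.

// Illustrative CUDA sketch (hypothetical example, not production code):
// offload a plain "CPU-like" task--adding two arrays--to the GPU.
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds exactly one pair of elements.
__global__ void addArrays(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // one million elements
    const size_t bytes = n * sizeof(float);

    // Prepare input data on the CPU side.
    float *a = new float[n], *b = new float[n], *out = new float[n];
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Allocate GPU memory and copy the inputs over.
    float *dA, *dB, *dOut;
    cudaMalloc((void**)&dA, bytes);
    cudaMalloc((void**)&dB, bytes);
    cudaMalloc((void**)&dOut, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover all n elements, then fetch the result.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    addArrays<<<blocks, threads>>>(dA, dB, dOut, n);
    cudaMemcpy(out, dOut, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", out[0]);      // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dOut);
    delete[] a; delete[] b; delete[] out;
    return 0;
}

A CPU would chew through that loop a handful of elements at a time; a modern GPU runs thousands of those addArrays threads at once, which is the whole pitch behind CUDA, OpenCL, and DirectX 11's compute support.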
The question Nvidia needs to be asking is simple: Will developers write their general-purpose GPU computing apps using a proprietary API that works on only a subset of PCs—those stuffed with Nvidia hardware—or will they use an open API that will work on every PC on the market?
I just returned from a special theater screening of WarGames—quite possibly the only good film Hollywood has ever produced about computers, computer nerds, or hacker culture. Shockingly, the movie, which was first released in 1983, holds up quite well, despite the use of archaic hardware (acoustic couplers and vocoder boxes), a laughable sentient military supercomputer, and an occasional lapse into typical Hollywood lingo.
The abundance of 8-inch floppy disks also gave people in the theater a laugh, as did the fact that characters were practically chain-smoking throughout the entire movie. But none of the showing’s pervasive air of yestertech could take away from the fact that WarGames remains awesome.