I spend a lot of my time thinking about kind of goofy stuff. For example, this article about Rolling Stone re-verdicting many albums over the years got me thinking about verdict scales. I'm reasonably happy with the Maximum PC scale. Our average and median scores skew a little high, but that's partly because we tend to review stuff we think readers will be interested in, since we have limited time and pages in the mag. The thing I don't like is that there's a fairly wide swath of no-man's-land between a 6 verdict and the dreaded deuce that we rarely utilize.
To give you a point of reference, a verdict of 6 represents a product that very few people should spend their money on. The product performs only a limited function, and you'd be interested in purchasing it only if you have need of that one function. We reserve the 2 verdict for products that don't work, that don't have any hope of working, and that you wouldn't want even if they did work. (We also occasionally give 1 verdicts, exclusively to products that have all the qualities of a 2 but also do actual physical harm to the tester or his computer.)
But, I wonder, do we actually need this level of granularity for PC hardware reviews? I'm starting to think that 11 different verdicts is too many for PC hardware. (I'm not even going to talk about the crazy 100 point scale that PC Gamer uses.) I wonder if we could actually mimic the scale that stock analysts use, with five gradations--Strong Buy, Buy, Neutral, Don't Buy, Strong Don't Buy (we'd come up with sassier names for the magazine, never fear). I think that using a less complex scale would allow us to actually use the entire thing, with products more evenly distributed across a typical bell curve. Strong Don't Buy would be reserved for products that don't actually work, while Strong Buy would be reserved for products that would currently be worthy of a Kick Ass award.
I would never suggest ditching verdicts altogether. CGW bravely tried that last year, and they switched back to numerical verdicts within a few months. I know lots of people use verdicts as a substitute for reading a review, which I find overly trusting. When I'm reading verdicted reviews, I use the score partially to determine whether I should read the actual review. I'm much more likely to read an extremely positive or extremely negative review than a middling one, especially if I'm not interested in making a purchase and am only reading for entertainment. If I'm shelling out my hard-earned cash, then I don't just read the reviews, I actually study them.
What do you guys think? Let me know, by posting in the comments below.
[Addition 5/23/07] For the record, I wouldn't suggest changing to a 1-5 scale. I think that the typical five-star scale has the same problems our current verdict system has. My suggestion is to actually make the verdicts Strong Don't Buy, Don't Buy, Neutral, Buy, and Strong Buy.