My opinion anyway...
I am so glad you guys brought this up and so pissed I don't have time to finish a project I am working on.
I have an algorithm I've been working on that's modified from a software product review package.
The output score is on a 1-100 scale, but you can't simply look at the score and say "oh - that one got a 52 - it must suck".
You have to compare it to other products in the same class.
My example run was done with OSes: Linux, OS X, and XP.
I think they all scored in the 70s. Linux got a good boost for being free.
The algorithm doesn't care whether the product is worth the price. It simply deducts points for certain price ranges. There is a subjective section - but basically, if you charge for the product you cannot get 100 points. Why is this fair? Because it's designed for scores to be compared across competing products. If a product is free, there will likely be things about it that lower its score as well.
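To make the price-deduction idea concrete, here's a minimal sketch. The actual brackets and point values aren't in the post, so the numbers below are made up for illustration - the only property taken from the description is that a free product loses nothing and any paid product loses something, so only a free product can still reach 100.

```python
def price_deduction(price):
    """Points deducted by price bracket.
    The brackets and amounts here are hypothetical examples."""
    if price == 0:
        return 0      # free: no deduction, a perfect 100 is still reachable
    elif price <= 50:
        return 5
    elif price <= 200:
        return 10
    else:
        return 15

# Any paid product starts below 100 before the other criteria are scored.
starting_ceiling = 100 - price_deduction(129)
```

The point isn't the exact bracket values - it's that the deduction is flat per range, independent of whether the product is "worth" its price.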
So what does a score of 78 mean when your competition is at 76? It means you have the better product as tested. It also means you have room to improve. And your competitor is right on your ass.
This method (OK - a similar one) is used in product consulting. The raw score isn't an indication of whether you are a "10", but of how much room you have to move with the product, and a scale you can rate competitors by.
Think about it like taking the MPC score and only letting it count for 10 points - then the other 90 are based on price, footprint, flexibility, compatibility, stability (as tested), etc. (there are 20 or so criteria IIRC).
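A weighted breakdown like that could be sketched as below. The criterion names and weights are invented stand-ins (the post only says the benchmark counts for about 10 of 100 points and the rest is split across roughly 20 criteria) - but it shows how a product can win overall while losing the benchmark:

```python
# Hypothetical weights summing to 100; the real list has ~20 criteria.
WEIGHTS = {
    "benchmark": 10,     # e.g. the MPC score, capped at 10 points
    "price": 15,
    "footprint": 10,
    "flexibility": 15,
    "compatibility": 15,
    "stability": 15,
    "other": 20,         # stand-in for the remaining criteria
}

def overall_score(ratings):
    """ratings maps criterion -> fraction earned (0.0 to 1.0)."""
    return sum(WEIGHTS[c] * ratings.get(c, 0.0) for c in WEIGHTS)

# Product A: weaker benchmark, but free (full price points).
a = overall_score({"benchmark": 0.9, "price": 1.0, "footprint": 0.7,
                   "flexibility": 0.8, "compatibility": 0.7,
                   "stability": 0.8, "other": 0.7})
# Product B: perfect benchmark, but paid (loses price points).
b = overall_score({"benchmark": 1.0, "price": 0.5, "footprint": 0.8,
                   "flexibility": 0.8, "compatibility": 0.8,
                   "stability": 0.8, "other": 0.7})
```

With these made-up numbers A edges out B a couple of points despite the lower benchmark, which is the "78 vs 76" situation described above.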