I went to check the thread I linked in the post below and it seems that things continue to heat up as Markus “Raging Bull” Heinsohn expresses his displeasure with the (I am assuming) rhetorical question:
“So if I code tic-tac-toe I get a score of 100%? Nice”
No, you don’t. But if you code tic-tac-toe and then add a bunch of features that don’t work very well or have awkward interfaces, you get a 47%. And people will be asking why the hell Heinsohn messed up the perfectly nice game of tic-tac-toe.
This is what I like to call the .400 Software Studios effect. When they were around, they made some very, very nice games…eventually. The release code was horrendous (possible exception being their pro basketball titles), but they were patched up over weeks and months.
Should reviewers give these types of games high scores based on what they could become, or on what they currently are? I am probably in the camp that wants to hold publishers’ feet to the fire and hold them accountable for what they release to the public. There’s too much of this paid beta testing going on, and I’m a bit tired of it.
On the other hand, you don’t want to kill a game if there’s hope that the developer will fix it.
Of all the reviewers on the planet, Brett Todd’s opinions probably most closely match my views on text games. He has a long resume in this genre. He was my sports editor at Games Domain Review (originally a UK site, by the way) many years ago and knows Championship Manager, Diamond Mind, and all that good old text gaming stuff. He knows what he is talking about, more than any other text-gaming reviewer out there as far as I am concerned.
I am not sure if he actually came up with the 47% score himself, but I certainly understand why it got that score. Maybe he and many others are getting tired of the .400 Software Studios effect.
The good news? Fix the game and OOTPB 2007 will get great scores next year!