I’ve been working with a group creating rubrics for evaluating submissions in a Second Life serious game competition. It’s an interesting problem, as there are broad variances in what folks are thinking. In reaction to a draft consensus, I rewrote the criteria to be evaluated as:
Comprehensiveness of alternatives to the right answer
Match of game decisions to learning objectives
Appropriateness of feedback
Appropriate interface match to action
Naturalness of feedback mechanism
Continuity of experience
Seamlessness in embedding decisions into game world
Appropriateness of world to audience
Ratio of relevant to irrelevant actions
Appropriate challenge balancing
Level of replay (linear, branching, engine-driven)
I know this can be done better. Your thoughts?
It’s an effort to combine my aligned elements from both education and engagement (the theoretical basis for my book on learning game design): clear goals, balanced challenge, thematic context, meaningfulness of action to story, meaningfulness of story to player, active choice, direct manipulation, integrated feedback, and novelty (see below), with the more standard elements necessary for a successful online experience.
I find it useful to revisit principles from another angle, as it gives me a fresh chance to reality-check my thinking. I think my older model holds up (and has continued to over the years), and the extras are not unique to learning games. Some elements cross boundaries; for example, feedback has two components: one relating to the learning, the other to the action.
The principles suggest that, done properly, the best learning practice (next to mentored real performance) ought to be games. Or, as I like to say: “Learning can, and should, be hard fun!”