We had quite the heated discussion today on a project I’m working on, and one of the emergent issues was whether ‘the expert’ dictates the objectives or whether the developer can change them. I recognized that this is not only an issue for our process going forward (read: scalability), but also a larger issue in learning game design.
In this case, the design the developer presented to the expert (a simplification; our team process is more complicated than this :) ) didn’t match the expert’s expectation. (This was an artifact of a bad choice of language at the beginning that confounded the issue.) However, the expert expected to present the objectives and have the game designed to achieve them. Which I would agree with, but with one caveat.
My caveat is two-fold. First, experts aren’t necessarily masters of learning. Second, they may not actually have access to the necessary objectives: expertise is ‘compiled’, and experts don’t necessarily know how they do what they do! (This is an outcome of cognitive science research, and something I talk about in my ‘deeper elearning’ talk and in my white paper on the topic (PDF).) In this case the experts will be instructors on the topic, so presumably they’re aware of both content and learning design, but we all know courses can end up too much knowledge and not enough skill.
Now, as Sid Meier said, “a good game is a series of interesting decisions”, and my extension is that good learning practice is a series of important decisions. I claim that you can’t give me a learning objective I can’t make a game for, though I reserve the right to move the objective high enough (in a learning taxonomy sense, e.g. from recalling a procedure to applying it in context). Conversely, an expert might bring in an objective that’s inappropriate for any number of reasons: it’s at too low a level, it’s not something individuals would really have difficulty with, or it won’t be important in the coming years. The developer might not recognize it as wrong from the standpoint of domain expertise, but when mapping a game mechanic onto it might realize it’s wrong because it makes for an uninteresting task (or because the developer is more closely tied to the audience, often being younger, more tech-savvy, etc.).
So, I believe (and it’s been my experience) that there’s of necessity a dialog between the source of the domain knowledge, be it expert, professor, or whatever, and the designer/developer/whatever. When it comes to objectives, once the expert understands the developer’s point, the expert does get the final say on the necessary tasks & skills, but they need to be open to the developer’s feedback and willing to work together on a design that’s both effective and engaging. My book is all about why that’s a doable goal and how to do it, but in short: the elements that make learning practice effective align perfectly with the elements that make an engaging interactive experience (and so say many authors, including Gee, Prensky, Aldrich, Johnson, Shaffer; the list goes on).
Similarly, the developer has to design the game experience around the objective, and while the expert may provide feedback about aesthetic preferences or information that helps establish the audience, in the end the developer has final say on the engagement. With good intentions all around, this will work (with bad intentions, it won’t work regardless :).
Which is, of course, where the team ended up, after an hour of raised voices and frustration. All’s well that ends well, I reckon. Are your experiences or expectations different?