Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

15 February 2009

Monday Broken ID Series: Concept Presentation

Clark @ 12:07 pm

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting on Mondays.  The intention is to provide insight into the many ways much of instructional design fails, and some pointers for avoiding the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

At some point (typically, after the introduction) we need to present the concept.  The concept is the key to the learning, really.  While we’ve derived our ultimate alignment from the performance objective, the concept provides the underlying framework to guide one’s performance.  We use the framework to provide feedback that helps the learner understand why their behavior was wrong, both in the learning experience and, ideally, beyond it, as the learner uses the model to continue developing their performance.  Except that, too often, we don’t provide the concept in a useful way.

What we too often see is a presentation of a rote procedure, without the underlying justification.  In business, we’ll teach a process.  In software, we’ll see feature/function presentations (literally going item by item through the menus!).  We’ll see tutorials to achieve a particular goal without presenting an underlying model.  And that’s broken.

We need models! The reason why is that people create mental models to explain the world.  People aren’t very good at remembering rote things (our brains are really good at pattern matching, but not rote memorization).  We can fake it, but it’s just crazy to have people memorize rote things unless it’s something we have to absolutely know cold (medical terminology is an example, as are emergency checklists for flights).  By and large, very little of what we need to know needs to be memorized.

Instead, what people need are models.  Models are powerful, because they have explanatory and predictive power.  If you forget a step in a procedure, but know the model driving the performance, you can regenerate the missing step.  With software, for instance, if you present the model, and several examples where the way to do something is derived from the model, and then you have the learner use inferences from the model to do a couple of tasks, you might be saved from having to present the whole system.

People will build models, so if you don’t give them one, it’s quite likely that the one they do build will be wrong.  And bad models are very hard to extinguish, because we patch them rather than replace them.  It places more responsibility on the designer to get the model, as, for reasons mentioned before, our SMEs may not be able to help us, but get it we must.  Realize that every procedure, software package, or behavior has a model that drives the reason why it should be done in a particular way, and find it. Then we need to communicate it.

Multiple models help! To communicate a model most effectively, we should communicate it in several ways.  Models are more memorable than rote material, but we need to facilitate internalization.  Prose is certainly one tool we can and should use (carefully, it’s way too easy to overwrite), but we should look at other ways to communicate it as well.

Multiple representations help in several ways.  First, they increase the likelihood that a learner will comprehend the model, and then have a path to comprehend the other representations.  Second, the multiple representations increase the number of paths to activate a model in a relevant context.  Finally, multiple representations increase the likelihood that one can map closely to the problem and facilitate a solution.

Multiple representations are, unfortunately, sometimes difficult to generate (more so than finding the original model).  However, we should always be able to at least generate a diagram.  This is because the model should have conceptual relationships, and these can be mapped to spatial relationships.  There’s some creativity involved, but that’s the fun part anyway!

Yes, doing good instructional design does take more work, but anything worth doing is worth doing well.  On a related, but important, note, the difference between broken ID and good ID is unfortunately subtle.  You may have to explain it (I have literally had to), but if you know what you’re doing and why, you should be able to.  And having developed a powerful representation increases the power and success of the learning, and consequently the performance.  Which is, of course, our goal. So, go forth and conceptualize!

9 Comments »

  1. Hi, I’ve been following your Monday series and this one’s pretty interesting. Would you be able to provide an example of a model that has worked in the past? It could be an example of software application training based on simulations (it’s one of my current projects).

    Comment by Sameer — 16 February 2009 @ 1:33 am

  2. Sameer, glad you’re finding it interesting! Unfortunately I haven’t been able to *do* one for software (dependent on client needs), but here’s one I would’ve done.

    I was trying to learn to use FreeHand (had a free copy), and the tutorial had me build a picture with a cone and a sphere. The problem was, I wanted to use it to build the Quinnovation logo, which splits across the shared letters (inn). What the tutorial didn’t explain, and should’ve, is that the primitive element of FreeHand is a path, and that those shapes (cone/sphere) were actually made of paths. I eventually found out that I could even convert fonts to paths, and then slice it with the ‘knife’ tool, and voilà, I got my logo (and the concept). So, I would’ve created the model that objects are created out of paths, and can be dis/aggregated, and that conglomerations are useful compounds, etc.

    I may be doing this in the near future, as we assist some software vendors to tighten up their training.

    Does that help?

    Comment by Clark — 16 February 2009 @ 11:31 am

  3. Thanks, Clark, for another great broken ID posting. What you’ve described has been one of my biggest pet peeves with many of the early design documents I see from fellow IDs. They can nail the detail of the process or procedure, but how it fits into the larger performance picture is generally missing. This is especially troubling within new hire orientation projects.

    If you’ll permit me, I think I have an example for Sameer that illustrates your comments.

    We were asked to develop software training for a new support application that our repair technicians would use. The techs were in customers’ homes servicing home appliances and electronics. The software managed the technician’s day (providing a route of customers they would meet that day) and provided visibility into each service order (history of that particular product, warranty information, etc.). Each service order ended with payment of some sort – either internal because the repair was covered, or actual payment by the customer.

    As Clark described, we could have easily designed this training as an item-by-item description of each feature/function in the software. But that would have decontextualized when or why the associate should do certain things.

    Instead, the model we used was the technician’s day – linking software function to external triggers. When I wake up I need to do “X”. When I get in the van I need to do “Y”. The reason to use the software existed in the larger context of going about my day. As such, there were external influences that dictated when to do things, as well as external processes that the tech’s use of the software would trigger. Without this broader appreciation the techs would have little understanding of the impact their use (or misuse) of the software was having.

    In addition, as Clark mentions about multiple representations, the training program created this awareness in different ways. Some early scenarios simply started with textual narratives of what the tech should be doing with the software. Other scenarios brought in first-person interaction, where the tech would need to do things based on video of managers or customers that provided the tech with external triggers. Finally, some scenarios relied on a third-person perspective – having the tech observe another tech’s actions and determine whether the actions were correct or not, then justify their responses.

    This elaborate model helped provide real-world context for how the software fit into the tech’s daily life, and also provided a greater appreciation for what happened ‘downstream’ based on their actions.

    Hope that helps, Sameer. Clark, please feel free to tweak my illustration if I may be misrepresenting your ideas around modeling.

    Comment by John Schulz — 16 February 2009 @ 11:41 am

  4. Great post, Clark. I completely agree. Context and helping relate training to the real world are definitely areas that can get ignored in training, especially software training. One thing I do, even in short demonstrations, is to provide a high-level overview and walk through a scenario driven example. Learners want to know the “when” and “why” around procedures. Feedback I’ve gotten has specifically asked for examples with a walk-through. Learners know the example won’t translate exactly to their specific situation, but they find it valuable to see how the steps fit into the big picture.

    Comment by Gary H — 16 February 2009 @ 3:18 pm

  5. This is great stuff, and thanks a lot, John, for your thoughts. I have a similar example to share, and I might have unknowingly made use of the kind of model Clark talks about.

    In the past we created software training for a handheld device that was also connected to the larger enterprise system back on the office floor. Different roles interacted with this software, from field salespeople to the managers on the floor. In this case we first identified a path for learning which took into account the common components of the software, followed by role-based components. In every section, we used scenario/problem-based learning, picking up contextual situations of everyday business. For example, we identified the life of a salesperson before and after introducing the software. We listed the X activities that he/she would now do using the software. Every activity was explained through a real-life scenario that demonstrated the keystroke steps using a voice-over.

    We personalized the whole experience by introducing agents who would perform the tasks. So if field staff had to check on inventory before accepting an order in front of the customer, they could do that using their handheld device. To teach this we created a pseudo scenario where a customer places an order, and oriented the learner on the steps accordingly. Obviously this would have been impossible without the dummy data provided by the SMEs.

    The point is, rather than having tutorials that explain what each function can do, it is much more motivating for the learner to see how the functionality works in everyday life. The “WIIFM” quotient is better in such cases.

    Comment by Sameer — 16 February 2009 @ 10:42 pm

  6. Hi Clark! Thanks for the great post.

    I have been thinking about models in a different context. I consider a model as a schema, with basic principles and concepts as the building blocks of the schema. For example, the principle of using a path, and the concept of the path itself, would have been the elements of the schema/model of understanding in the FreeHand case.

    My key challenge is creating schemas/models for many of the learning items I create. Sometimes I feel that a particular schema needs a sub-schema, which may mutate into a parallel schema/model altogether. I think I have a problem with multiple models, and very likely it is my inability to process multiple schemas/models in a given context! :) Thanks!

    Comment by Dip — 17 February 2009 @ 3:19 am

  7. What great conversation, thanks! I find it very interesting how you talk about making relevant examples and practice, and I fully agree (you’ll see in the next couple of Mondays). I want to be clear here that the particular issue is having the conceptual model underpinning how to perform in a scenario, and presenting that explicitly (as Dip suggests, the schema).

    While modelling the behavior of using the system is important contextualization, I want an underlying conceptual model of the tech software, the handheld system, etc.

    Comment by Clark — 17 February 2009 @ 12:54 pm

  8. Good stuff, Clark. I think there are some common themes emerging in the industry. We may be able to save ourselves yet :) The concept focus is something I’ve been attempting to project as a priority principle, with limited success until recently.

    I believe this is similar to what you mean. I’ve been drawing this simplified diagram to illustrate the concept foundation for a while. I read and assimilate so much stuff on a daily basis that I couldn’t tell you if it’s original or not. So if it’s original I attribute it to me – if not, it’s someone else’s :)

    http://www.xpconcept.com/conceptRelationship.jpg

    Comment by sflowers — 23 February 2009 @ 2:45 pm

  9. I’m a tron chaser by training. So block diagrams to describe function and relationship are bread and butter. Years ago I did a bit of work for a software company. I suggested abstracting functions into some diagrams and activities that showed the conceptual relationships between states of the application. I don’t remember a warm reception to the approach.

    I think that relational concept mapping can have a tremendous clarifying effect. I would bet that this visualization may also aid in task recall.

    Comment by sflowers — 23 February 2009 @ 2:49 pm
