Learnlets


Clark Quinn’s Learnings about Learning

Designing for an uncertain world

17 April 2010 by Clark 9 Comments

My problem with the formal models of instructional design (e.g. ADDIE for process) is that most are based upon a flawed premise. The premise is that the world is predictable and understandable, so that we can capture the ‘right’ behavior and train it. Which, I think, is a naive assumption, at least in this day and age. So why do I think so, and what do I think we can (and should) do about it? (Note: I let my argument lead where it must, and find I go quite beyond my intended suggestion of a broader learning design. Fair warning!)

The world is inherently chaotic. At a finite granularity, it is reasonably predictable, but overall it’s chaotic. Dave Snowden’s Cynefin model, recommending various approaches depending on the relative complexity of the situation, provides a top-level strategy for action, but doesn’t provide predictions about how to support learning, and I think we need more.   However, most of our design models are predicated on knowing what we need people to do, and developing learning to deliver that capability.   Which is wrong; if we can define it at that fine a granularity, we bloody well ought to automate it.   Why have people do rote things?

It’s a bad idea to have people do rote things, because they don’t, can’t do them well.   It’s in the nature of our cognitive architecture to have some randomness.   And it’s beneath us to be trained to do something repetitive, to do something that doesn’t respect and take advantage of the great capacity of our brains.   Instead, we should be doing pattern-matching and decision-making.   Now, there are levels of this, and we should match the performer to the task, but as I heard Barry Schwartz eloquently say recently, even the most mundane seeming jobs require some real decision making, and in many cases that’s not within the purview of   training.

And top-down, rigid structures, with one person doing the thinking for many, will no longer work. Businesses keep adding complexity, but that eventually fails, as Clay Shirky has noted, and adaptive approaches are likely to be more fruitful, as Harold Jarche has pointed out. People are going to be far better equipped to deal with unpredictable change if they have internalized a set of organizational values and a powerful set of models to apply than by any possible amount of rote training.

Now think about learning design. Start with the objectives: Mager’s notion, where you define the context and the performance, is getting harder to apply. Increasingly there are complicated nuances that you can’t anticipate. Our products and services are more complex, and yet we need more seamless execution. For example, consider trying to debug a problem between a hardware device and a network service provider: if you’re trying to provide a total customer experience, the old “it’s the other guy’s fault” just isn’t going to cut it. Yes, we could make our objectives higher and higher, e.g. “recognize and solve the customer’s problem in a contextually appropriate way”, but I think we’re getting out of the realm of training.

We are seeing richer design models. Van Merrienboer’s 4 Component ID, for instance, breaks learning up into the knowledge we need and the complex problems we need to apply that knowledge to. David Metcalf talks about learning theory mashups as ways to incorporate new technologies, which is, at least, a good interim step and possibly the necessary approach. Still, I’m looking for something deeper. I want a curriculum that focuses on dealing with ambiguity, helping us bring models to bear through an iterative and collaborative approach. A pedagogy that looks at slow development over time through rich and engaging experiences. And a design process that recognizes how we use tools and work with others in the world as part of a larger vision of cognition, problem-solving, and design.

We have to look at the entire performance ecosystem as the context, including the technology affordances, learning culture, organizational goals, and the immediate context.   We have to look at the learner, not stopping at their knowledge and experience, but also including their passions, who they can connect to, their current context (including technology, location, current activity), and goals.   And then we need to find a way to suggest, as Wayne Hodgins would have it, the right stuff, e.g. the right content or capability, at the right time, in the right way, …

An appropriate approach has to integrate theories as disparate as distributed cognition, the appropriateness of spaced practice, minimalism, and more.   We probably need to start iteratively, with the long term development of learning, and similarly opportunistic performance support, and then see how we intermingle those together.

Overall, however, this is how we go beyond intervention to augmentation. Clive Thompson, in a recent Wired column, draws from a recent “man+computer” chess competition to conclude “serious cognitive advantages accrue to those who are best at thinking alongside machines”. We can accessorize our brains, but I want to look at the other side: how can we systematically prepare people to be effectively supported by machines? That’s a different twist on technology support for performance, and one that requires thinking about what the technology can do, but also about how we develop people to be able to take advantage of it. A mutual accommodation will happen, but just as with learning to learn, we shouldn’t assume an ‘ability to perform with technology augmentation’. We need to design the technology/human system to work together, and develop both so that the overall system is equipped to work in an uncertain world.

I realize I’ve gone quite beyond just instructional design.   At this point, I don’t even have a label for what I’m talking about, but I do think that the argument that has emerged (admittedly, flowing out from somewhere that wasn’t consciously accessible until it appeared on the page!) is food for thought.   I welcome your reactions, as I contemplate mine.

The GPS and EPSS

20 March 2010 by Clark Leave a Comment

It’s not unknown for me to enter my name into a drawing for something, if I don’t mind what they’re doing with it. It’s almost unknown, however, for me to actually win, but that’s what happened a month or so ago when I put a comment on a blog prior to the MacWorld show and won a copy of Navigon turn-by-turn navigation software for my iPhone. I’d thought a dedicated unit might be better (though I’d have to carry two devices), since if I moved from an iPhone to a Droid or Pre the phone software would be stranded. But for free…

When I used to travel more (and that’s starting again), I usually managed to get by with Google Maps: put in my desired location (so glad they finally put copy/paste in, such a no-brainer rather than having to write it down elsewhere and type it in, or remember it, usually imperfectly). In general, maps are a great cognitive augment, a tool we’ve developed to be very useful. And I’m pretty good with directions (thankfully), so when a trip went awry it wasn’t too bad. (Though upper New Jersey…well, it can get scary.) Still, I’d been thinking seriously about getting a GPS, and then I won one!

And I’m happy to report that Navigon is pretty darn cool. At first the audio was too faint, but then I found out that upping the iPod volume (?) worked. (And then it didn’t the last time, at all, with no explanation I can find. Wish it used the darn volume buttons. We’ll see next time.) However, it does a fabulous job of displaying where you are, what’s coming up, and recalculating if you’ve made a mistake. It’s a battery hog, keeping the device on all the time, but that’s why we have charging holders (which I’d already acquired for long trips and music). It also takes up memory, keeping the maps onboard the device (handy if you’re in an area with bad network coverage), but that’s not a problem for me.

However, my point here is not to extol the virtues of a GPS, but instead to use it as a model for optimal performance support, as an EPSS (Electronic Performance Support System). There’s a problem with maps in a real-time performance situation. This goes back to my contention that the major role of mlearning is accessorizing our brain. Memorizing a map of a strange place is not something our brains do well. We can point to the right address, and in familiar places choose between good roads, but the cognitive overhead is too high for a path of many turns in unfamiliar territory. To compound the challenge, the task is ‘real time’, in that you’re driving and have to make decisions within a limited window of recognition. Also, your attention has to be largely outside the vehicle, directed towards the environment. And to cap it all off, the conditions can be dark, and visibility obscured by inclement weather. All told, navigation can be challenging.

While the optimal solution is a map-equipped partner sitting ‘shot-gun’, a GPS has been designed to be the next best thing (and in some ways superior). It has the maps, knows the goal, and often knows more about certain peculiarities of the environment than a map-equipped but similarly novice partner would. A GPS also typically doesn’t get distracted when it should be navigating. It can provide voice assistance while you’re driving, so you don’t need to look at the device when your attention needs to be on the road, but at safe moments it can visually display useful guidance about which lanes to be in (and avoid), without requiring much screen real estate.

And that’s a powerful model to generalize from: what is the task, what are our strengths and limitations, and what is the right distribution of task between device and individual?   What information can a device glean from the immediate and networked environment, from the user, and then provide the user, either onboard or networked?   How can it adapt to a changing state, and continue to guide performance?

Many years ago, Don Norman talked about how you could sit in pretty much any car and know how to drive it, since the interface had time to evolve to a standard. The GPS has similarly evolved in capabilities to a useful standard. However, the more we know about how our brains work, the more we can predetermine what sort of support is likely to be useful. Which isn’t to say that we still won’t need to trial and refine, and use good principles of design across the board: interface, information architecture, minimalism, and more. We can, and should, be thinking about meeting organizational performance needs, not just learning needs. Memorizing maps isn’t necessarily going to be as useful as having a map, and knowing how to read it. What is the right breakdown between human and tool in your world, for the individuals you want to perform at their best? What’s their EPSS?

And on a personal note, it’s nice to have the mobile learning manuscript draft put to bed, and to be able to get back into blogging and more. A touch of the flu delayed my ability to think for a bit, but now I’m ready to go. And off I go to the Learning Solutions conference in Orlando, to talk mobile, deeper learning, and more. The conference will both interfere with blogging and provide fodder. If you’re there, please do say hello.

Writing and the 4C’s of Mobile

8 February 2010 by Clark 1 Comment

As I’ve mentioned before, I’m writing a book on mobile learning.   My only previous experience was writing Engaging Learning, where the prose practically exploded from my fingers. This time is different.

The prose actually does flow quite easily from my fingers,   but I find myself restructuring more often than last time.   This is a bigger topic, and I keep uncovering new ways to think about mobile and new facets to try to include.   As a consequence, as the deadline nears (!), I find myself more and more compelled to put all free time into the text.

There’s a consequence, and that is a decreasing frequency of blogging.   I’m coming up with some great ideas, but I’ve got to get them into the book, and I’m not finding time to rewrite them.

When I do have ideas in other areas (and I always do), I’m finding that they disappear under the pressure to meet my deadline. And there are ancillary details still to be taken care of (photos of devices, coordinating a few case studies).

Further, as neither blogging nor the book (directly) pays the bills, I’ve still got to meet my clients’ needs. Also, I’m speaking at the Learning Solutions conference and involved in various ways with several others, and some deliverables are due soon. I’m feeling a tad stretched!

So, in many ways, this is an apology for the lack of blog posts, and for the fact that posting will likely be sparse for another month and then some.

As a brief recompense, I did want to communicate one framework that I’m finding helpful. I’ll confess that it’s very similar to Low and O’Connell’s 4 R’s (for which I can’t find a link!?!; from my notes: Record, Recall, Reinterpret, Relate), but I can never remember them, which means they need a new alliteration. Mine’s a bit simpler:

  • Content: the provision of media (e.g. documents, audio, video, etc) to the learner/performer
  • Compute: taking in data from the learner and processing it
  • Communicate: connecting learners/performers with others
  • Capture: taking in data from sensors including camera, GPS, etc, and saving for sharing or reflection
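
(Purely as an illustration, and not part of the framework itself, here’s a minimal sketch of using the 4C’s as a design checklist; the activities and code are hypothetical, just to show the idea of auditing a mobile design against the four capabilities.)

```python
# Hypothetical sketch: audit a mobile learning design against the 4C's
# (content, compute, communicate, capture) to spot untapped capabilities.
from dataclasses import dataclass

FOUR_CS = {"content", "compute", "communicate", "capture"}

@dataclass
class Activity:
    name: str
    cs: frozenset  # which of the 4C's this activity exercises

design = [
    Activity("Watch a short how-to video on site", frozenset({"content"})),
    Activity("Answer a quick self-check quiz with feedback", frozenset({"compute"})),
    Activity("Message a mentor about an unusual fault", frozenset({"communicate"})),
]

used = set().union(*(a.cs for a in design))
print("Capabilities not yet tapped:", FOUR_CS - used)  # -> {'capture'}
```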

I find this one of several frameworks that support ‘thinking different’ about mobile capabilities.   I’ll be interested to hear your thoughts.

Is it all problem-solving?

12 January 2010 by Clark 5 Comments

I’ve been arguing for a while that we need to take a broader picture of learning, that the responsibility of learning units in the organization should be ensuring adequate infrastructure, skills, and culture for innovation, creativity, design, research, collaboration, etc, not just formal learning. As I look at those different components, however, I wonder if there’s an overarching, integrating viewpoint.

When people go looking for information, or colleagues, they have a problem to solve. It may be a known one with an effective solution, or it may be new. It doesn’t matter whether it’s a new service to create, a new product to design, a customer service problem, an existing bug, or what. It’s all really a situation where we need an answer and we don’t have one.

We’ll have some constraints, some information, but we’re going to have to research, hypothesize, experiment, etc. If it’s rote, we ought to have it automated, or we ought to have the solution in a performance support manner. Yes, there are times training is part of the solution. But this very much means that first, all our formal solutions (courses, job aids, etc) should be organized around problem-solving (which is another way of saying that we need the objectives to be organized around doing).

Once we go beyond that, it seems to me that there’s a plausible case to be made that all our informal learning also needs to be organized from a problem-solving perspective. What does that mean?

One of the things I know about problem-solving is that our thought processes are susceptible to certain traps that are an outcome of our cognitive architecture. Functional fixedness and set effects are just two of them. Various techniques have evolved to overcome these, including problem re-representation, systematic approaches to brainstorming, support for lateral thinking, and more.

Should we be baking this into the infrastructure? We can’t neglect skills. Assuming that individuals are effective problem-solvers is a mistake. The benefits of instruction in problem-solving skills have been demonstrated. Are we teaching folks how to find and use data, how to design useful experiments and test solutions? Do folks know what sort of resources would be useful? Do they know how to ask for help, manage a problem-solving process, and deal with organizational issues as well as conceptual ones?

Finally, if you don’t have a culture that supports problem-solving, it’s unlikely to happen. Without an environment that tolerates experimentation (and the associated failure), that supports sharing and reflection, and that rewards diverse participation and individual initiative, you’re not going to get the type of proactive solutions you want.

This is still embryonic, but I’m inclined to believe that there are some benefits from pushing this approach a bit. What say you?

Creating meaningful experiences

8 December 2009 by Clark 7 Comments

What if the learner’s experience was ‘hard fun’: challenging, but engaging, yielding a desirable experience, not just an event to be tolerated, OR what is learning experience design?

Can you imagine creating a ‘course’ that wins raving fans?   It’s about designing learning that is not only effective but seriously engaging.   I believe that this is not only doable, but doable under real world constraints.

Let me start with this bit of the wikipedia definition of experience design:

the practice of designing…with a focus placed on the quality of the user experience…, with less emphasis placed on increasing and improving functionality

That is, experience design is about creating a user experience: not just focusing on the user’s goals, but thinking about the process as well. And that, to me, is what’s largely ignored in creating elearning: thinking about the process from the learner’s perspective. There are really two components: what we need to accomplish, and what we’d like the learner to experience.

Our first goal still has to look at the learning need, and identify an objective that we’d like learners to meet, but even that we need to rethink.   We may have constraints on delivery environment, resources, and more that we have to address as well, but that’s not the barrier.   The barrier is the mistake of focusing on knowledge-level objectives, not on meaningful skill change.   Let me be very clear: one of the real components of creating a learning experience is ensuring that we develop, and communicate, a learning objective that the learner will ‘get’ is important and meaningful to them.   And we have to take on the responsibility for making that happen.

Then, we need to design an experience that accomplishes that goal, but in a way that yields a worthwhile experience.   I’ve talked before about the emotional trajectory we might want the learner to go through.   It should start with a (potentially wry) recognition that this is needed, some initial anxiety but a cautious optimism, etc.   We want the learner to gradually develop confidence in their ability, and even some excitement about the experience and the outcome.   We’d like them to leave with no anxiety about the learning, and a sense of accomplishment.   There are a lot of components I’ve talked about along the way, but at core it’s about addressing motivation, expectations, and concerns.

Actually, we might even shoot for more: a transformative experience, where the learner leaves with an awareness of a fundamental shift in their understanding of the world, with new perspectives and attitudes to accompany their changed vocabulary and capabilities.   People look for those in many ways in their life; we should deliver.

This does not come from applying traditional instructional design to an interview with a SME (or even a Subject Matter Network, as I’m increasingly hearing, and I’m inclined to agree). As I defined it before, learning design is the intersection of learning, information, and experience design. It takes a broad awareness of how we learn, incorporating behavioral, cognitive, constructivist, connectivist, and other viewpoints. It takes an awareness of how we experience: media effects on cognition and emotion, and the dramatic arts. And most of all, it takes creativity and vision.

However, that does not mean it can’t be developed reliably and repeatably, on a pragmatic basis. It just means you have to approach it anew. It takes expertise, a team with the requisite complementary skill sets, and organizational support. And commitment. What will work will depend on the context and goals (best principles, not best practices), but I will suggest that, with good content development processes, a sound design approach, and a will to achieve more than the ordinary, this is doable on a scalable basis; we just have to be willing to take the necessary steps. Are you ready to take your learning to the next level, and create experiences?

The Augmented Performer

2 December 2009 by Clark 4 Comments

The post I did yesterday on Distributed Cognition also triggered another thought, about the augmented learner.   The cited post talked about how design doesn’t recognize the augmented performer, and this is a point I’ve made elsewhere, but I wanted to capture it in a richer representation.   Naturally, I made a diagram:

[Diagram: distributed cognition]

If we look at our human capabilities, we’re very good pattern matchers, but pretty bad at exercising rote performance. So we can identify problems, and strategize about solutions, but when it comes to executing rote tasks, like calculation, we’re slow and error prone. From the point of view of a problem we’re trying to solve, we’re not as effective as we could be.

However, when we augment our intellect, say with a networked device (read: mobile), we’re augmenting our problem-solving and executive capability with some really powerful calculation capability, sensors we’re typically not equipped with (e.g. GPS, compass), and access to a ridiculously huge amount of potential information through the internet, as well as to our colleagues. From the point of view of the problem, we’re suddenly a much more awesome opponent.

And that is the real power of technology: wherever and whenever we are, and whatever we’re trying to do, there’s an app for that.   Or could be.   Are you empowering your performers to be awesome problem-solvers?

Distributed Thinking & Learning

1 December 2009 by Clark 2 Comments

A post I was pointed to reviews a chapter on distributed thinking, a topic I’ve liked since my days getting to work with Ed Hutchins and his work on Distributed Cognition. It’s a topic I spoke about at DevLearn, and recently wrote about. The chapter is by David Perkins, one of the premier thinkers on thinking, and I like several things he says.

For one, he says: “typical psychological and educational practices treat the person in a way that is much closer to person-solo”. I think that’s spot-on: we don’t tend to train for, or design for, the augmented human, and yet we know from situated and distributed cognition that much of the problem-solving we do is augmented in many ways, from pencil and paper, to calculators, references, and mobile devices.

I also like his separation of task solving from executive function, where executive function is the searching, sequencing, etc. of the underlying domain-specific tasks, and how he notes that just because you create an environment that requires executive functioning, it doesn’t mean the learner will be able to develop those skills. “In general, cognitive opportunities are not in themselves cognitive scaffolds.” So treat all those so-called ‘edutainment’ games that claim to develop problem-solving skills with great care; they may require such skills, but I’ve seen little evidence that they actually develop them.

The implication is that if we have kids solve problems with executive support, but without scaffolding that support and gradually releasing those executive skills to the learner, we’re not really developing appropriate problem-solving skills. We don’t talk explicitly about them, and consequently leave their acquisition to chance. If we don’t put 21st century skills into our courses, whether K12, higher ed, or organizational, we’re not really developing our performers.

And that, at the end of the day, is what we need to be doing. So start thinking a bit broader, and deeper, about learning and the components thereof, and produce better learning, better learners, and ultimately better performance outcomes.

Who authorizes the authority?

28 November 2009 by Clark 2 Comments

As a reaction to my eLearnMag editorial on the changing nature of the educational publishing market, Publish or Perish, a colleague said: “There is a tremendous opportunity in the higher ed publishing market for a company that understands what it means to design and deliver engaging, valuable, and authentic customer experiences–from content to services to customer service and training.”

I agree, but it triggered a further thought. When we go beyond delivering content as a component of a learning experience, and start delivering learning experiences, are we moving from publisher to education provider?   And if so, what are the certification processes?

Currently, institutions are accredited by accrediting bodies. Different bodies accredit different things. There are specialized accrediting bodies (e.g. AACSB or ACBSP for business, ABET for applied science). In some cases, there are just regional accreditation bodies (e.g. WASC). There’s overlap, in that a computer science school might want to align with ABET, and yet the institution has to be accredited by, say, WASC.

And I think this is good, in that groups overseeing specific domains can be responsive to changing demands, while general accreditation oversees ongoing process. I recall that, in the past, the latter was largely about ensuring that there were regular reviews and specific improvement processes, almost an ISO 9001 approach. However, are these bodies really able to keep up? Are they in touch with new directions? The recent scandals around business school curricula seem to indicate some flaws.

On the other hand, who needs accreditation? We still have corporate universities; they don’t seem to need to be accredited except by their own organization, though sometimes they partner with institutions to deliver accredited programs. And many people provide coaching services and workshops. There are even certificates for workshops, which presumably depend on the quality of the presenter, and sometimes on some rigor around the process to ensure that there’s feedback going on so that continuing education credits can be earned.

My point is, the standards vary considerably, but when do you cross the line? Presumably, you can’t claim outcomes that aren’t legitimate (“we’ll raise your IQ 30 points” or somesuch), but otherwise, you can sell whatever the market will bear.   And you can arrange to be vetted by an independent body, but that’s problematic from a cost and scale perspective.

Several issues arise from this for me.   Say you wanted to develop some content (e.g. deeper instructional design, if you’re concerned like me about the lack of quality in elearning).   You could just put it out there, and make it available for free, if you’ve the resources.   Otherwise, you could try to attach a pricetag, and see if anyone would pay.   However, what if you really felt it was a definitive suite of content, the equivalent of a Master’s course in Instructional Technology?   You could sell it, but you couldn’t award a degree even if you had the background and expertise to make a strong claim that it’s a more rigorous degree than some of those offered by accredited institutions, and more worthwhile.

The broader question, to me, is: what is the ongoing role of accreditation? I’ve argued that the role of universities, going forward, will likely be to develop learning-to-learn skills. So, after your higher ed experience (and this really should be accomplished in K12, but that’s another rant), you should be capable of developing your own skills. If you’ve developed your own learning abilities, and believe you’ve mastered an area, I guess you really only need to satisfy your current or prospective employer.

On the other hand, an external validation certainly makes it easier to evaluate someone rather than the time-intensive process of evaluation by yourself.   Maybe there’s a market for much more focused evaluations, and associated content?

So, will we see broader diversity of acceptable evaluations, more evaluation of the authorial voice of any particular learning experience, a lifting of the game by educational institutions, or a growing   market of diverse accreditation (“get credit for your life experience” from the Fly By Night School of Chicanery)?

Who are mindmaps for?

13 November 2009 by Clark 9 Comments

In response to my recent mindmap of Andrew McAfee’s conference keynote (one of a number of mindmaps I’ve done), I got this comment:

Does the diagram work as a useful way of encapsulating the talk for someone who was there? Because, speaking as someone who wasn’t, I find it almost entirely content-free. Just kind of a collection of buzz-phrases in thought bubbles, more or less randomly connected.

I’m not trying to criticise his talk – which obviously I didn’t hear – or his points – which I still have no idea about – but the diagram as a method of conveying information is a total failure to this sample size of one. Possibly more useful as a refresher mechanism for people who got the talk in its original form?

Do mindmaps work for readers?   Well, I have to admit one reason I mindmap is completely personal.   I do it to help me process the presentation. Depending on the speaker, I can thoughtfully reprocess the information, or sometimes just take down interesting comments, but there are several benefits: In figuring out the ways to link, I’m capturing the conceptual structure of the talk (really, they’re concept maps), and I’m also occupying my personal bandwidth enough to allow me to focus on the talk without my mind chasing down one path and missing something.   Er, mostly…

Then there’s a second category, those who actually heard the talk: for them the maps might be worthwhile for reflection and re-processing. I’d welcome anyone weighing in on that; I don’t have access to someone else’s example to see whether it would work for me.

Then, there are the potential viewers, like the commenter, for whom it’s either possible or not to process any coherent idea out of the presentation.   I looked back at the diagram for McAfee’s keynote, and I can see that I was cryptic, missing some keywords in communicating. This was for two reasons: one, he was quick, and it was hard to get it down before he’d moved on.   Two, he was eloquent, and because he was quick I couldn’t find time to paraphrase.   And there’s a more pragmatic reason; I try to constrain the size of the mindmap, and I’m always rearranging to get it to fit on one page.   That effort may keep me more terse than is optimal for unsupported processing.

I will take issue with “more or less randomly connected”, however. The connections are quite specific. In all the talks I’ve done this for, there have been several core points elaborated in ways that vary from talk to talk, but each talk tends to be composed of a repeated structure. The connections capture that structure. For instance, McAfee repeatedly took a theme, used an example to highlight it, then derived a take-home point and some corollaries. There would be ways to convey that structure more eloquently (e.g. labeled links, color coding), but the structure isn’t always laid out beforehand (it’s emergent), and things move fast enough that I couldn’t do it on the fly.

I could post-process more, but in the most recent two cases I wanted to get it up quickly: when I tweeted I was making the mindmap, others said they were eager to see it, so I hung on for some minutes after the keynotes to get it up quickly.   McAfee himself tweeted “dang, that was FAST – nice work!”

I did put the arrow in the background to indicate the order in which the discussion flowed, as well, but apparently the map is still too telegraphic for the non-attendee. It happens that I know the commenter well, and he’s a very smart guy, so if he’s having trouble, that’s definitely an argument that the raw mindmap alone is not communicative, at least not without some post-processing to make the points clear.

Really valuable to get the feedback, and worthwhile to reflect on what the tradeoffs are and who benefits. It may be that these are only valuable for fellow attendees.   Or just me. I may have to consider a) not posting, b) slowing down and doing more post-processing, or…?   Comments welcome!

Game-based meta-cognitive coaching

15 October 2009 by Clark 1 Comment

Many years ago, I read of some work being done by Valerie Shute and Jeffrey Bonar that I later got a chance to actually play a (very small) role in (and even later got to work with Valerie, definitely world-class talent).   They had developed three separate tutoring environments (geometric optics, economics, electrical circuits), yet the tutoring engine was essentially the same across all three, not domain specific.   The clever thing they were doing was tutoring on exploration skills, varying one variable at a time, making reasonable increments in values to graph trends, etc.

Subsequent to that, I got involved again in games for learning. What naturally occurred to me was that you could put the same sort of meta-cognitive skill tutoring into a game environment, since you already have to digitally create all the elements you’d need to track for game purposes, and the tutoring could be a layer on top. While this would work in a single game (and we did put a small version into the Quest game), it would be even better on top of a game engine. I even proposed it as a research project, but the grant reviewers thought that, while a good idea, it was too ambitious (ahead of my time and underestimated :).

The renewed interest in so-called 21st century skills, the kind Stephen Downes so eloquently calls an Operating System for the Mind, reawakens the opportunity. These skills are manifested in activity, and inferring a learner’s approach and providing feedback requires an understanding of that activity. In a well-defined arena like a designed game environment, we can know the goals and possible actions, and start looking for patterns of behavior.

Game engines, with their fixed primitives, make it easier to define what the goals are, and consequently to specify particular goals and to define pattern-detection more generally. Thus, in a game, we can see whether the learners’ exploration is systematic, whether their attempts are as informative as possible, and possibly more.
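
(Purely as a hypothetical sketch, and not the Shute & Bonar engine or any particular game engine’s API, here’s the flavor of one such pattern check: given a log of a learner’s experiment trials, flag when more than one variable changes at once, the classic ‘vary one thing at a time’ heuristic. The variable names and values are invented.)

```python
# Hypothetical sketch: flag when a learner's experiment log changes more
# than one variable between successive trials (unsystematic exploration).
trials = [
    {"voltage": 1.0, "resistance": 10},
    {"voltage": 2.0, "resistance": 10},   # only voltage changed: systematic
    {"voltage": 3.0, "resistance": 20},   # two variables changed at once
]

def changed_vars(prev, curr):
    """Variables whose values differ between two trials."""
    return {k for k in curr if curr[k] != prev[k]}

for prev, curr in zip(trials, trials[1:]):
    changed = changed_vars(prev, curr)
    if len(changed) > 1:
        print("Coaching cue: you changed", sorted(changed),
              "together; try varying one variable at a time.")
```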

This is also true of virtual worlds, although only when designed with goals (e.g. from a simulation to a scenario, whether tuned into a game or not).   The benefit of a virtual world is, again, the primitives are fixed, simplifying the task of defining goals and actions.

Of course, building in particular types of interaction (e.g. social), particular types of cues (e.g. audio versus visual), and looking for patterns can provide deeper opportunities. Really, such performance is initially an assessment (one facet of what we were doing on the Intellectricity project was building a learner-characteristic assessment as a game), and that assessment can trigger intervention as a consequence. For any malleable skill, we have real opportunities.

Given that much of what’s needed is the ability to research, evaluate the quality of sources, design, experiment, create, and more, these environments are a fascinating opportunity. I’m not in a situation to lead such an initiative, but I still think it’s a worthwhile undertaking. Anyone ‘game’?
