Learnlets


Clark Quinn’s Learnings about Learning

Co-design of workflow

6 August 2010 by Clark 3 Comments

I’ve talked before about how our design task needs to accommodate both formal learning and informal job resources. As I’ve been thinking about (and working on) this model, however, it occurs to me that there is another way to think about learning design that we have to consider.

The first notion is that we should not design our formal learning solutions without thinking about what the performance support aspects are as well. We need to co-design our performance support solutions along with our preparation for performance so that they mutually reflect (and reference) each other. Our goal has to be to look at the total development and execution of the task.

The other way I’ve now been thinking of it, however, is to think about designing the workflow and the learning ‘flow’ together. Visualize the formal and informal learning flows as components within an overall workflow. You want the performer focusing on the task, with learning tools ‘to hand’ within the task flow. Ideally, the person is able to find the answers, or even learn some new things, while still in the work context. (Context is so important to learning that we spend large sums recreating context away from our existing work context!)

The point being, not only are formal and informal learning co-designed, but they’re both co-designed in the context of understanding the flow of performance, so you’re designing the work/learning context. Which means we’re incorporating user-interface and user-experience design, as well as resource design (e.g. technical communications), on top of our learning design. And probably more.

Now, are you ready to buy this? Because I’d talked myself to this point and then realized: “but wait, there’s more. If you call now, we’ll throw in” an obvious extension, to be covered in the next and last post of this series (tying it back to the explorability and incremental advantage I started with in my last post).

Explorability and Incremental Advantage

5 August 2010 by Clark Leave a Comment

During a summer internship at NASA, many years ago, I met a researcher who was conceptualizing the interface property of ‘explorability’. I can’t claim to accurately convey the nuances of Jean-Marc Robert’s model, but I was intrigued by the notion. The idea that interfaces could differ in the extent to which they supported experimentation and subsequent comprehension seemed valuable. The requisite property would be predictability, which requires consistency; learnable interfaces would empower users.

A related concept is Andi diSessa’s ‘incremental advantage’, where he proposed that interfaces should elegantly reward a user’s investment in learning with more power. His Boxer software environment, for example, supported the gradual addition of concepts to yield more computational capability. The underlying notion of ‘the more you learn, the more you can do’ again seems like a user-empowering concept.

Fast-forward a few years: as a newly-minted academic using HyperCard for student interface design projects, I recognized that the notion of buttons, fields, and backgrounds provided a reasonable implementation of the ideas of explorability and incremental advantage. I proposed that the key idea was supporting correct inferences about how to make things happen. Interestingly, the English-like nature of HyperTalk supported both correct and incorrect inferences about constructing more complex logic.

As a side note, the combination of software design that supports a strong conceptual model with software training that builds the model (not rote procedures) strikes me as a learning approach that is far more powerful but seldom seen. Similarly for other learning outcomes: models are powerful thinking tools that we do not leverage sufficiently.

The reason I mention this is two-fold: I want to bring this concept to light, and to build on it. As I mentioned before, I think we need to make editable environments to support collaborative tool building. This will become more important going forward, for reasons that I intend to elaborate across two subsequent posts. Stay tuned!

On principle, practice, experimentation, and theory

28 July 2010 by Clark 2 Comments

On Twitter today a brief conversation ensued about best practices versus best principles. I’ve gone off on this before (I think Dilbert sums it up nicely), and my tweet today captures my belief:

“please, *not* best practices; abstract best principles and recontextualize!”

However, I want to go further.

Several times recently I’ve had people ask for research that justifies a particular position. At a micro-level, that makes sense. But there’s little ‘micro’ about the types of problems we solve. So I hear it at a larger level: “why should we make learning more scenario-based?”, or “what is the empirical evidence about social learning in the organization?”. And the problem is, you can’t really answer the question the way they think you should be able to. On principle (heh).

The problem is that most empirical research tends to be done around very small situations: these three classrooms, trialed in this state or province. In many cases, there just haven’t been specific studies close enough to make a reasonable inference. And it’s hard to coordinate large studies that are really generalizable, for pragmatic reasons that include logistics and funding.

What’s done instead, when sufficient cases arise, are meta-studies (such as the recent one that found online learning somewhat better than face-to-face), which look across research; but you need a sufficient quantity of comparable studies (and someone capable and motivated). Or you can point to long programs of study built around theoretical positions (e.g. John Sweller’s Cognitive Load Theory). And expert practitioners typically have created heuristics or procedures across long experience that can guide you. In any case, you’re making inferences from a variety of studies and models. One of my favorite models (Cognitive Apprenticeship) actually came from finding synergy across several bodies of work.

So what’s a person to do? Sure, if you can find that specific relevant experiment, go for it. Otherwise:

  • look to what others do, but don’t try to immediately adopt their practices; look for the underlying principles and adapt those,
  • look to theories folks have proposed, and see how they might guide your approach,
  • bring in someone who’s had experience doing this,
  • or, think through it yourself, conceptualize the relationships, and determine what should be appropriate approaches.

(Note that the latter likely will take longer.) This is a ‘design-based research‘ approach, and to continue you need to trial, evaluate, and refine. Please do bring your reflections back to the conceptual domain. We need more transparency!

The point I’m trying to make here is that, particularly in the learning sciences (e.g. when you’re working with the human brain), the properties aren’t as predictable as cement or steel; there is a bit of ‘uncertainty principle‘ going on (studying it changes the situation), and your intervention can very much affect how the individual perceives the task and possibilities. You should expect to do some iteration and tuning. And your bases for decision will not be individual research studies, by and large, but frameworks, models, and inferences.

Still, it’s systematic, based upon research and theory, and the best we can do. So what are you waiting for?

Brain science in design?

3 July 2010 by Clark 4 Comments

The Learning Circuits Blog Big Question of the Month is “Does the discussion of ‘how the brain learns’ impact your eLearning design?” My answer is in several parts.

The short answer is “yes”, of course, because my PhD is in Cognitive Psychology (really, applied cognitive science), and I’ve looked at cognitive learning, behavioral learning, social constructivist learning, connectionist learning, even machine learning, looking for guidance about how to design better learning experiences. And there is good guidance. However, most of it comes from research on learning, not from neuroscience.

The longer answer has some caveats. Some of the so-called brain science ranges from misguided to outright misleading. Some of the ‘learning styles’ materials claim to be based in brain structure, but the evidence is suspect at best. Similarly, some of the inferences from neural structures are taken inappropriately. There’s quite a bit of excitement, and fortunately some light amidst the heat and smoke. In short, there’s a lot of misinformation out there.

At the end of the day, the best guidance is still the combination of empirical results from research on how we learn, a ‘design research’ approach with iterative testing, and some inspiration in lieu of what still needs to be tested (e.g. engagement). I think we know a lot about designing effective learning that is grounded in how our brains work, but few implications follow from the physiology of the brain. As others have said, mechanisms at one layer of ‘architecture’ don’t necessarily determine phenomena at higher levels. We’ve lots to learn yet about our brains.

As with so many other ‘snake oil’ issues, like multigenerational differences, learning styles, digital natives, etc., brain-based learning appears to be trying to sell you a program rather than a solution. Look for good research, not good marketing. Caveat emptor!

On magic, or the appearance thereof

25 June 2010 by Clark 3 Comments

Many years ago, I responded to a broad query by Jefferey Bonar asking what was the interface metaphor we really wanted. I responded something to the effect of wanting ‘magic’. This was in the early days of the desktop metaphor, when we were already looking to go beyond it, and I was looking for the ultimate metaphor of control.

Now I didn’t mean magic in the ‘legerdemain’, sleight-of-hand sense, nor the magic I feel when sitting on the deck on a warm summer evening with my family, but instead the classic form with incantations, artifacts, etc. What I really wanted was to be empowered, and the best metaphor for total power I can imagine is having the ability to bring things into being, to have questions answered, to control the world with mere gestures and commands. And yet, even that has to have some structure. As Clay Kallam wrote in a recent column comparing two recent fantasy books:

“The plot of both books relies heavily on the magic, but Coe is careful to explain how his works and its limitations and impact. Drake seems to just call on some whenever it suits him, and nothing is explained.”

So, what I meant was that there was rigor underlying the metaphor of magic, rigor that roughly parallels the structures of programming languages. For example, Rob Moser (my PhD student) prototyped a game for his thesis that taught programming via learning to cast magic spells in a fantasy world. My vision was that in any place you wanted to, you could learn the underlying magic (language) to accomplish what you wanted, but if you didn’t, you’d be able to buy artifacts (e.g. wands, crystal balls, etc) that did specific things that you wanted without having to program.

The reason I mention this, before you think I’m going off with the fairies and unicorns, is that there are reasons to start thinking about magic. As Arthur C. Clarke has said:

Any sufficiently advanced technology is indistinguishable from magic.

And I really think we’re there. That is, our technology has advanced to the point that it is no longer a barrier. We can truly bring any information, and any person (at least virtually), anywhere we want. We can augment our world with information to make us substantially more effective: we can talk through ‘mirrors’ (video portals) to others, actually seeing them; we can bring up ‘demons’ (agents) to go find information for us; we can send out commands to make things happen at a distance; we can unveil previously hidden information about the environment, making conceptual links between it and our understanding to make us smarter.

There’s more required, such as Andi diSessa’s ‘incremental advantage’ and more accessible ways to specify our intentions. But with really powerful metaphors emerging (styles are something everyone should get their minds around), with gestural interfaces and the ability to control games with our bodies, and with augmented reality (aka heads-up displays for civilians), we’ve got the tools. What we need are the perspective and the will.

This is important from the point of view of designing new solutions. Years ago, when I taught interface design, I told my students that one of the steps in their exploration of the design space should be to imagine what they would do if they had ‘magic’. To be more specific: once you’ve gathered the requirements, and before you see what others have done and start limiting yourself to pragmatics, imagine what you’d do with no limitations (ok, except mind-reading; I’m just not going there). Given that one of our cognitive predispositions is to converge prematurely on solutions, we need lateral input. By exploring the possibility space in a more unhampered way, we might come across a solution that’s inspired, not tired, and revolutionary, not evolutionary.

This, however, is not just interface design, but specifically learning and performance support design. What would you do if you had magic to help meet your learning and performance needs? Because you have it. Really.

So think magically: not in the trivial sense, but in the sense that we have awesome powers at our command. The limitations are no longer in the technology; the limits are between our ears (and, occasionally, in our wallets or will). Go forth and empower!

John Romero keynote mind map #iel2010

2 June 2010 by Clark Leave a Comment

Here’s my mind map of John Romero’s keynote on social gaming (again done with OmniGraffle on my iPad; smaller than the Kay map, as he only talked for half an hour):

Alan Kay keynote mindmap from #iel2010

2 June 2010 by Clark 2 Comments

Today, one of my heroes, Alan Kay, gave a keynote to the Innovations in eLearning conference. The mindmap can’t convey the broad range, but to get it out there…

Performer-focused Integration

17 May 2010 by Clark Leave a Comment

On a recent night, I was part of a panel on the future of technical communication with the local chapter of the Society for Technical Communication, and there were several facets of the conversation that I found really interesting. Our host had pulled together Yas Etassam, an XML architecture consultant who’s deep into content models (e.g. DITA) and tools, and Meryl Natchez, who started a very successful technical writing firm. And, of course, me.

My inclusion shouldn’t be that much of a surprise. The convener had heard me speak on the performance ecosystem (via Enterprise 2.0, with a nod to my ITA colleagues), and I’d included mention of content models, learning experience design, etc. My background in interface design (studying under Don Norman and, as a consequence, teaching interface design at UNSW), plus work with publishers and adaptive systems using content models, means I’ve been touching a lot of their work, and I gave a different perspective.

It was a lively session, with us disagreeing and then finding resolution, to our edification as well as the audience’s. We covered new devices, tools, and movements in corporate approaches to supporting performance, as well as shifts in skill sets.

The first topic of interest was the perspective they took on their role. They talk about ‘content’, and include learning content as well. I queried that, asking whether they saw their area of responsibility as covering formal learning too, and was surprised to hear them answer in the affirmative. After all, it’s all content. I countered with the expected ‘it’s about the experience’ stance, to which Meryl replied to the effect of “if I’m working, I just want the information, not an experience”. We reconciled that formal learning, where learners need support for motivation and context, needs the sort of experience I was talking about, but even her situation required the information coming in a way that wasn’t disruptive: we needed to think about the performer experience.

The other facet was the organizational structure in this regard. Given the view that it’s all content, I asked whether they thought they covered formal learning; they agreed that they didn’t deliver training, but noted that technical writers often create training materials: manuals, even online courses. Yet they also agreed, when pushed, that most organizations weren’t structured that way, and documentation was separate from training. And we all agreed that, going forward, this was a problem. I pushed the point that knowledge was changing faster than their processes could cope, and they agreed. We also agreed that breaking down those silos and integrating performance support, documentation, learning, eCommunity, and more was increasingly necessary.

This raised the question of what to do about user-generated content: I was curious what they saw as their role in this regard. They took a content-management stance, for one, suggesting that it’s content too and needs to be stored and made searchable. Yas talked about the powerful systems that folks are using to develop and manage content. We also discussed the analogy to learning, in that the move is from content production to content-production facilitation.

One of the most interesting revelations for me actually came before the panel, in the networking and dinner portion, where I learned about topic-based authoring. I’ve been a fan of content models for over a decade now, from back when I was talking about the granularity of learning objects. The concept I was promoting was to write tightly around definitions for introduction components, concept presentations, examples, practice items, etc. It takes more discipline, but the upside is much more powerful opportunities for the type of smart delivery that we’re now capable of and even seeing. The technical publications area is a bit ahead on this front: topic-based authoring is a discipline around this approach that provides the rigor needed to make it work. It’s currently applied to technical needs (e.g. performance support), which is reason enough, but there can and should be educational applications as well.
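
To make the component idea concrete, here’s a minimal sketch, with hypothetical names of my own (this isn’t DITA or any particular system): small pieces written against tight definitions and tagged by type, so delivery can assemble just what a given context needs.

```python
from dataclasses import dataclass, field
from enum import Enum


class ComponentType(Enum):
    INTRODUCTION = "introduction"
    CONCEPT = "concept"
    EXAMPLE = "example"
    PRACTICE = "practice"


@dataclass
class ContentComponent:
    topic: str              # e.g. "password reset"
    kind: ComponentType
    body: str
    tags: set = field(default_factory=set)   # audience, product, version...


def assemble(components, topic, kinds):
    """Pull only the component types a given delivery context needs."""
    return [c for c in components if c.topic == topic and c.kind in kinds]


library = [
    ContentComponent("password reset", ComponentType.CONCEPT, "what a reset does"),
    ContentComponent("password reset", ComponentType.EXAMPLE, "a worked example"),
    ContentComponent("password reset", ComponentType.PRACTICE, "try it yourself"),
]

# A performance support job aid pulls just concept + example;
# a formal course would add the practice component.
job_aid = assemble(library, "password reset",
                   {ComponentType.CONCEPT, ComponentType.EXAMPLE})
```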

Meryl pointed out that the skill-set shift needn’t be unitary: there are a lot of related areas in their world, such as executive communications, content management, and information architecture; even instructional design is a potential path. The basics of writing are still necessary, but as in our field, facilitation skills for user-generated content may also play a role. The rate of change means that technical writers, just like instructional designers, won’t be able to produce all the needed information, so a way for individuals to develop materials will be needed. As mentioned above, Yas just cared that they did the necessary tagging! Which gets into interesting system questions about how we can make that process as automatic as possible and minimize the onerous parts of the work.

The integration we need is for all those who are performer-focused not to be working in ignorance of (let alone opposition to) each other. Formal learning should be developed in awareness of the job aids that will be used, and vice-versa. The flow from marketing to engineering has to stop forking as the same content gets re-purposed for documentation, customer training, sales training, and customer service; instead, a coherent path should populate each systematically.

Training Book Reviews

14 May 2010 by Clark 2 Comments

The eminent Jane Bozarth has started a new site called Training Book Reviews. Despite the unfortunate name, I think it’s a great idea: a site for book reviews for those of us passionate about solving workplace performance needs. While submitting new reviews would be great, she notes:

share a few hundred words

1) on a favorite, must-own title, or maybe even

2) of criticism about a venerated work that has perhaps developed an undeserved glow

In the interest of sparking your participation (for instance, someone should write a glowing review of Engaging Learning :), here’s a contribution:

More than 20 years ago now, Donald Norman released what subsequently became the first of a series of books on design. My copy is titled The Psychology of Everyday Things (he liked the acronym POET), but based upon feedback it was renamed The Design of Everyday Things, as it really was a fundamental treatise on design. And it has become a classic. (Disclaimer: he was my PhD advisor while he was writing this book.)

Have you ever burned yourself trying to get the shower water flow and temperature right? Had trouble figuring out which knob turns on a particular burner on the stove? Pushed on a door that pulls, or vice versa? Don explains why. The book looks at how our minds interact with the world, how we use the clues our current environment provides, coupled with our prior experience, to figure out how to do things. And how designers violate those expectations in ways that reliably lead to frustration. While Don’s work on design had started with human-computer interaction and user-centered design, this book is much more general. Quite simply, you will find that you look at everyday things (shower controls, door handles, and more) in a whole new way.

The understanding of how we understand the world is not just for furniture designers or interface designers; it is a critical component of how learning designers need to think. While his subsequent books, including Things That Make Us Smart and Emotional Design, add deeper cognition and engagement (respectively) and more, the core understanding from this first book provides a foundation that you can (and should) apply directly.

Short, pointed, and clear, this book will have you nodding your head in agreement as you recognize frustrations you didn’t even know you were experiencing. It will, quite simply, change the way you look at the world, and improve your ability to design learning experiences. A must-read.

Interactivity & Mobile Development

12 May 2010 by Clark 1 Comment

A while ago, I characterized the stages of web development as:

  • Web 1.0: producer-generated content, where you had to be able to manage a server and work in obscure codes
  • Web 2.0: user-generated content, where web tools allowed anyone to generate web content
  • Web 3.0: system-generated content, where engines or agents will custom-assemble content for you based upon what’s known about you, what context you’re in, what content’s available, etc
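
That third level is the one that benefits from an illustration. Here’s a toy sketch (all names and data are hypothetical, not any real engine) of system-generated content: score the available items against what’s known about the user and their context, and assemble a custom page from the best matches.

```python
# A toy sketch of the Web 3.0 notion: an engine custom-assembles
# content based on the user, the context, and what's available.
# All names and data here are hypothetical.

def assemble_page(user_interests, context_tags, available):
    """Rank available content by overlap with user and context tags."""
    def score(item):
        return (len(item["tags"] & user_interests)
                + len(item["tags"] & context_tags))
    return sorted(available, key=score, reverse=True)[:3]


content = [
    {"title": "Mobile design patterns", "tags": {"mobile", "design"}},
    {"title": "Server administration",  "tags": {"servers", "ops"}},
    {"title": "Designing for context",  "tags": {"design", "context"}},
]

# A designer reading on a phone gets the mobile/design items first.
page = assemble_page({"design", "mobile"}, {"mobile", "commuting"}, content)
```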

It occurred to me that an analogous approach may be useful in thinking about interactivity. To understand the problem, realize that there has been a long history of attempts, for a variety of reasons, to characterize different levels of interactivity (e.g. Rod Sims’ paper for ITFORUM). More recently, interactivity has been proposed as an item to tag within learning object systems to differentiate objects. Unfortunately, the taxonomy has been ‘low’, ‘medium’, and ‘high’, without any parameters to distinguish between them. And very few people, without some guidance, are going to want to characterize their content as ‘low’ interactivity.

Thinking from the perspective of mobile content, it occurred to me that I see three basic levels of interaction. The first is essentially passive: you watch a video, listen to an audio, or read a document (text potentially augmented by graphics). This is roughly equivalent to producer-generated content. The next level is navigable content: most specifically, hyper-documents (like the web), where users can navigate to what they want. This matters for me on mobile, as both static content and navigable content are easily done cross-platform. (I note that user-generated content through most web interfaces is technically beyond this level.)

The third level is system-generated interaction, where what you’ve done has an effect on what happens next. The web is largely state-independent, though that’s changing (e.g. Amazon’s mass customization). This is where you have some computation going on in the background, whether it’s form processing or full game interaction. And this is where mobile falls apart: rich computation and associated graphics are hard to do. Flash has been the lingua franca of online interactivity, supporting cross-platform delivery. However, Flash hasn’t run well on mobile devices, it is claimed, for performance reasons. Yet there is really no other cross-platform environment; you have to compile for each platform independently.
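
To pull the three levels together as a tagging taxonomy, here’s a sketch of how they might be parameterized (my construction, not any standard’s vocabulary): each level is defined by observable properties of the content, rather than an unanchored ‘low/medium/high’ label.

```python
from enum import Enum


class Interactivity(Enum):
    PASSIVE = 1     # watch, listen, or read; no choices beyond play/stop
    NAVIGABLE = 2   # hyper-documents: the user chooses where to go
    COMPUTED = 3    # system-generated: state and computation drive what's next


def classify(has_links: bool, has_state: bool) -> Interactivity:
    """Derive a level from two observable properties of the content."""
    if has_state:
        return Interactivity.COMPUTED
    return Interactivity.NAVIGABLE if has_links else Interactivity.PASSIVE


# A video is PASSIVE; a linked reference is NAVIGABLE;
# a branching scenario or game is COMPUTED.
assert classify(has_links=False, has_state=False) is Interactivity.PASSIVE
assert classify(has_links=True, has_state=False) is Interactivity.NAVIGABLE
assert classify(has_links=True, has_state=True) is Interactivity.COMPUTED
```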

This analysis provides three meaningful levels of interactivity for defining content, and indicates what is currently feasible on mobile and what still presents barriers. The mobile levels will change, perhaps if HTML 5 can support more powerful computation, interaction, and graphics, or if the performance problems (or the perception thereof) go away. Fingers crossed!
