Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: align

Scenarios and Conceptual Clarity

10 December 2015 by Clark 5 Comments

I recently came across an article ostensibly about branching scenarios, but somehow the discussion largely missed the point.  Ok, so I can be a stickler for conceptual clarity, but I think it’s important to distinguish between different types of scenarios and their relative strengths and weaknesses.

So in my book  Engaging Learning, I was looking to talk about how to make engaging learning experiences.  I was pushing games (and still do) and how to design them, but I also wanted to acknowledge the various approximations thereto.  So in it, I characterized the differences between what I called mini-scenarios, linear scenarios, and contingent scenarios (this latter is what’s traditionally called branching scenarios).  These are all approximations to full games, with various tradeoffs.

At the core, let me be clear, is the need to put learners in situations where they need to make decisions. The goal is to have those decisions closely mimic the decisions they will need to make after the learning experience. There's a context (aka the story setting), and then a specific situation triggers the need to make a decision. And we can deliver this in a number of ways. The ideal is a simulation-driven (aka model-driven or engine-driven) experience. There's a model of the world underneath that calculates the outcomes of your action and determines whether you've yet achieved success (or failure), or generates a new opportunity to act. We can (and should) tune this into a serious game. This gives us a deep experience, but the model-building is challenging, and there are shortcuts.

In mini-scenarios, you put the learner in a setting with a situation that precipitates a decision. Just one, and then there's feedback. You could use video, a graphic-novel format, or just prose, but the game problem is a setting and a situation, leading to choices. Similarly, you could have them respond by selecting option A, B, or C, or pointing to the right answer, or whatever. It stops there. Which is the weakness, because in the real world the consequences are typically more complex than this, and it's nice if the learning experience reflects that reality. Still, it's better than a knowledge test. Really, these are just better-written multiple-choice questions, but that's at least a start!

Linear scenarios are a bit more complex. There is a series of game problems in the same context, but whatever the player chooses, the right decision is ultimately made, leading to the next problem. You use some sort of sleight of hand, such as "a supervisor catches the mistake and rectifies it, informing you…", to make it all ok. Or, the scenario can terminate, requiring a restart, if you make the wrong decision at any point. These are a step up in terms of showing the more complex consequences, but are a bit unrealistic. There's some learning power here, but not as much as is possible. I have used them as a sort of multiple mini-scenarios with content in between, where the same story is used for the next choice, which at least made a nice flow. Cathy Moore suggests these are valuable for novices, and I think they're also useful if everyone needs to receive the same 'test' in some accreditation environment, to be fair and balanced (though in a competency-based world they'd be better off with the full game).

Then there's the full branching scenario (which I called contingent scenarios in the book, because the consequences, and even the new decisions, are contingent on your choices). That is, you see different opportunities depending on your choice. If you make one decision, the subsequent ones are different. If you don't shut down the network right away, for instance, the consequences are different (perhaps a breach) than if you do (you get the VP mad). This, of course, is much more like the real world. The only difference between this and a serious game is that the contingencies in the world are hard-wired in the branches, not captured in a separate model (rules and variables). This is easier, but it gets tough to track if you have too many branches. And the lack of an engine limits the replay and the ability to have randomness. Of course, you can make several of these.
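The structural difference is easy to see in code. Here's a minimal sketch (my own illustration, not from the book) of a hard-wired branching scenario, using the network example above. The node names and text are hypothetical; the point is that each choice leads to a different subsequent node, so the contingencies live in the branch structure itself rather than in a separate model:

```python
# A contingent (branching) scenario as hard-wired branches:
# each node is a situation with choices, and each choice leads
# to a different subsequent node.
SCENARIO = {
    "breach_alert": {
        "situation": "Monitoring flags unusual traffic on the network.",
        "choices": {
            "shut_down_network": "vp_angry",
            "keep_monitoring": "data_breach",
        },
    },
    "vp_angry": {
        "situation": "The VP is upset about the outage, but the data is safe.",
        "choices": {},  # terminal node
    },
    "data_breach": {
        "situation": "Attackers exfiltrate data before you react.",
        "choices": {},  # terminal node
    },
}

def play(scenario, start, decisions):
    """Follow a sequence of decisions through the branch structure,
    returning the list of nodes visited."""
    path, node = [start], start
    for choice in decisions:
        node = scenario[node]["choices"][choice]
        path.append(node)
    return path
```

A serious game would replace the hard-wired `choices` tables with an engine that computes the next situation from rules and variables, which is exactly what makes replay and randomness possible, and what the branch structure above can't do.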

So the problem I had with the article that triggered this post is that their generic model looked like a mini-scenario, and nowhere did they show the full concept of a real branching scenario. Further, their example was really a linear scenario, not a branching scenario. And I realize this may seem like arguing about 'angels dancing on the head of a pin', but I think it's important to make distinctions when they affect the learning outcome, so you can more clearly make a choice that reflects the goal you are trying to achieve.

To their credit, the fact that they were pushing for contextualized decision making at all is a major win, so I don't want to quibble too much. Moving our learning practice/assessment/activity to more contextualized performance is a good thing. Still, I hope this elaboration is useful in getting more nuanced solutions. Learning design really can't be treated as a paint-by-numbers exercise; you really should know what you're doing!

Evidence for benefits: Towards Maturity Report

30 November 2015 by Clark 1 Comment

An organization that I cited in the Revolution book, Towards Maturity, has recently released their 2015-2016 Industry Benchmark Report, and it’s of interest to individuals and organizations looking for real data on what’s working, and not, in L&D.  Towards Maturity has been collecting benchmarking data on L&D practices for over a decade, and what they find bolsters the case to move L&D forwards.

The report has a number of useful sections, including documenting the current state of the industry, guidance for business leaders on expectations, on listening to learners, and on rethinking  the L&D team.  Included are some top level pointers for executives and L&D.  And while the report is  biased towards Europe, respondents cover the globe including Asia, Americas, and more.

Overall, they're finding that technology spending averages 19% of L&D budgets (and this has been essentially flat for 3 years). This seems light; given that technology is a key enabler of performance and development, such a figure doesn't seem appropriate. Of course, given that 55% of formal learning is still delivered face-to-face, this isn't surprising.

A more interesting outcome is comparing what they call Top Deck organizations: those in the top 10% of their Towards Maturity Index. These organizations are characterized by four elements that are tied to success:

  • Learning aligned to need
  • Active learner voice
  • Design beyond the course
  • Proactive in connecting

Here we see key elements of the revolution. For one, learning isn't done on demand, but is coupled to organizational improvements. For another, the learner is engaged in the process of determining what solutions make sense. One that intrigues me is that the solutions go beyond courses, looking at performance support and more. And finally, L&D is reaching out across silos to engage in conversations. These are all key to achieving results 6-8 times those of the average organization.

The advice to business leaders also echoes the revolution. The call is to focus on performance, not on courses.  It’s not about learning, it’s about outcomes.  The recommendation  is to break down silos so as to achieve the conversations that will achieve meaningful impact.

The advice goes on: understand how learners are learning, create a participatory culture, and use  real business metrics.  All grounded in what successful organizations are doing.  The point here is not to recite all the outcomes, but instead to list highlights and encourage you to have a look at the report.  Going forward, you might even consider benchmarking your own organization!

Benchmarking is about best practices, and of course I encourage best principles, but the frameworks they use are grounded in best principles, and measuring yourself against the framework and improving is really more important than comparing yourself to others. I will suggest that measuring yourself and evaluating your progress is a valuable investment of time in conjunction with a strategy.

What I really like, of course, is that the data support the position posited by principles that I derived from both practical experience and relevant conceptual models. The evidence is converging that there are positive steps L&D can, and should, take.  The revolution provides the roadmap, and their data provides a way to evaluate progress.  Here’s to improving L&D!

Facilitating Knowledge Work #wolweek

18 November 2015 by Clark 2 Comments

In the course of some work with a social business agency, I was wondering how to represent the notion of facilitating continual innovation. This representation emerged from my cogitations, and while it's not quite right, I thought I'd share it as part of Work Out Loud week.

The core is the 5 R's: Researching the opportunities, processing your explorations by either Representing them or putting them into practice (Reify), Reflecting on those, and then Releasing them. And of course it's recursive: this is a release of my representation of some ideas I've been researching, right? This is very much based on Harold Jarche's Seek-Sense-Share model for Personal Knowledge Mastery (PKM). I'm trying to be concrete about the different types of activities you might do in the Sense section, as I think representations such as diagrams are valuable but very different from active application via prototyping and testing. (And yes, I'm really stretching to keep the alliteration of the R's. I may have to abandon that. ;)

What was interesting to me was to think of the ways in which we can facilitate around those activities. We shouldn't assume good research skills; we can assist individuals in understanding what qualifies as a good search for input and in evaluating the hits, as well as in establishing and filtering existing information streams.

We can and should also facilitate the representation of interpretations, whether informing the properties of good diagrams, prose, or other representational forms. We can help make the processes of representation clear as well. Similarly, we can develop understanding of useful experimentation approaches, and how to evaluate the results.

Finally, we can communicate the outcomes of our reflections, and collaborate on all these activities, whether research, representation, reification (that R is a real stretch), or reflection. As I'm doing here, soliciting feedback.

I do believe there’s a role for L&D to look at these activities as well, and ‘training’ isn’t the solution. Here the role is very much facilitation.   It’s a different skill set, yet a fundamental contribution to the success of the organization. If you believe, like I do, that the increasing rate of change means innovation is the only sustainable differentiator for success, then this role is crucial and it’s one I think L&D has the opportunity to take on.  Ok, those are my thoughts, what are yours?

Vale Jay Cross

7 November 2015 by Clark 23 Comments

It's too soon, so it's hard to write this. My friend and colleague, Jay Cross, passed away suddenly and unexpectedly. He had a big impact on the field of elearning, and his insight and enthusiasm were a great contribution.

I had the pleasure to meet him at a lunch arranged by a colleague to introduce learning tech colleagues in the SF East Bay area. Several of us discovered we shared an interest in meta-learning, or learning to learn, and we decided to campaign together on it, forming the Meta-Learning Lab. While not a successful endeavor in impact, Jay and I discovered a shared enjoyment in good food and drink, travel, and learning. We hobnobbed in the usual places, and he got me invited to some exotic locales including Abu Dhabi, Berlin, and India.

Jay was great to travel with; he’d read up on wherever it was and would then be a veritable  tour guide. It amazed me how he could remember all that information and point out things as we walked.  He had a phenomenal memory; he read more than anyone I know, and synthesized the information to create an impressive intellect.

After Princeton he'd gone on for an MBA at Harvard, and his subsequent endeavors included creating the first MBA program for the University of Phoenix. He was great to listen to when doing business, and served as a role model; I often tapped into my 'inner Jay' when dealing with clients. He always found ways to add more value to whatever was being discussed.

He was influential. While others may have quibbled about whether he created the term ‘elearning’, he definitely had strong opinions about what should be happening, and was typically right.  His book  Informal Learning  had a major impact on the field.

He was also a raconteur, with great stories and a love of humor. He had little tolerance for stupidity, and could eviscerate silly arguments with a clear insight and incisive wit. As such,  he could be a bit of a rogue.  He ruffled some feathers here and there, and some could be put off by his energy and enthusiasm, but his intentions were always in the right place.

Overall, he was a really good person. He happily shared with others his enthusiasm and energy.  He mentored many, including me, and was always working to make things better for individuals, organizations, the field, and society as a whole. He had a great heart to match his great intellect, and was happiest in the midst of exuberant exploration.

He will be missed. Rest in peace.

Some other recollections of  Jay:

Harold Jarche

Jane Hart

Charles Jennings

Kevin Wheeler

Laura Overton

Inge de Waard

Alan Levine

Curt Bonk

David Kelly

Brent Schlenker

Dave Ferguson

George Siemens

Mark Oehlert

Gina Minks

John Sener

Sahana Chattopadhyay

Christy Tucker

Adam Salkeld

Learning Solutions  from the eLearning Guild

CLO Magazine

A twitter collection (courtesy of Jane Hart)

Bio from his graduating class.

#itashare

A Competent Competency Process

4 November 2015 by Clark 3 Comments

In the process of looking at ways to improve the design of courses, the starting point is good objectives. And as a consequence, I've been enthused about the notion of competencies as a way to put the focus on what people do, not what they know. So how do we do this systematically, reliably, and repeatably?

Let's be clear: there are times we need knowledge-level objectives. In medicine, or any other field where responses need to be quick and accurate, we need a very constrained vocabulary. So drilling in the exact meanings of words is valuable, as an example. Though ideally, that's coupled with using that language to set context or make decisions. So "we know it's the right medial collateral ligament, prep for the surgery" could serve as a context, or we could have a choice to operate on the left or right ventricle as a decision point. As Van Merriënboer's 4 Component Instructional Design talks about, we need to separate out the knowledge from the complex problems we apply it to. Still, I suggest that what's likely to make a difference to individuals and organizations is the ability to make better decisions, not recite rote knowledge.

So how do we get competencies when we want them? The problem, as I've talked about before, is that SMEs don't have access to 70% of what they actually do; it's compiled away. We then need good processes, so I've talked to a couple of educational institutions doing competencies, to see what could be learned. And it's clear that while there's no turnkey approach, what's emerging is a process with some specific elements.

One thing is that if you're trying to cover a whole college-level course, you've got to break it up. Break down the top level into a handful of competencies. Then you continue to take each of those apart, and perhaps another level, 'til you have a reasonable scope. This is heuristic, of course, but with a focus on 'do', you have a good likelihood of getting there.

One of the things I've heard across various entities trying to get meaningful objectives is working with more than one SME. If you can get several, you have a better chance of triangulating on the right outcomes and objectives. They may well disagree about the knowledge, but if you manage the process right (emphasize 'do', lather, rinse, repeat), you should be able to get them to converge. It may take some education.

Not just any SMEs will do. Two things are really valuable: on-the-ground experience to know what needs to be done (and what doesn't), and the ability to identify and articulate the models that guide the performance. Some instructors, for instance, can teach to a text but aren't truly masters of the content, nor are they experienced practitioners. Multiple SMEs help, but the better the SME, the better the outcome.

I believe you want to ensure that you're getting both the right things, and all the things. I've recommended to a client triangulating not just with SMEs, but with practitioners (or, rather, the managers of the roles the learners will be engaged in), and any other reliable stakeholders. The point is to get input from the practice as well as the theory, identifying the models that support proper behavior, and the misconceptions that underpin where they go wrong.

Once you have a clear idea of the things people need to be able to do, you can then identify the language for the competencies. I'm not a fan of Bloom's (unwieldy, hard to reliably apply), but I am a fan of Mager-style definitions (action, context, metric).
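To make the Mager-style structure concrete, here's a small sketch of my own (the field names and the example competency are hypothetical, not Mager's or anyone else's official formulation): the objective is three parts, and the full statement is assembled from them.

```python
from dataclasses import dataclass

@dataclass
class Competency:
    """A Mager-style objective: what the learner does (action),
    under what conditions (context), and how success is judged (metric)."""
    action: str
    context: str
    metric: str

    def statement(self) -> str:
        # Assemble the three parts into a single objective statement.
        return f"Given {self.context}, the learner will {self.action}, {self.metric}."

# Hypothetical example competency, focused on 'do', not 'know'.
triage = Competency(
    action="prioritize incoming patients for treatment",
    context="a simulated emergency-room intake with five cases",
    metric="matching the expert ordering in at least four of five cases",
)
```

Note that nothing in the structure is about recall; the context and metric force the objective toward observable performance, which is the whole point of competencies.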

After this is done, you can identify the knowledge needed, and perhaps create objectives for that, but to me the focus is on the 'do', the competencies. This is very much aligned with an activity-based learning model, whereby you immediately design the activities that align with the competencies before you decide on the content.

So, this is what I'm inferring. There are good tools and templates you could design to go with this, identifying competencies and misconceptions, and at the same time also getting stories and motivations. (An exercise left for the reader. ;) The overall goal, however, of getting meaningful objectives is key to getting good learning design. Any nuances I'm missing?

The new shape of organizations?

20 October 2015 by Clark 2 Comments

As I read more about how to create organizations that are resilient and adaptable, there's an interesting emergent characteristic. What I'm seeing is a particular pattern of structure that has arisen out of totally disparate areas, yet keeps repeating. While I haven't had a chance to think about it at scale, like how it would manifest in a large organization, it certainly has some strengths.

Dave Gray, in his recent book The Connected Company that I reviewed, has argued for a 'podular' structure, where small groups of people are connected in larger aggregations, but work largely independently. He argues that each pod is a small business within the larger business, which gives flexibility and adaptiveness. Innovation, which tends to get stifled in a hierarchical structure, can flourish in this more flexible structure.

More recently, on Harold Jarche's recommendation, I read Niels Pflaeging's Organize for Complexity, a book also on how to create high-performance organizations. While I think the argument was a bit sketchy (to be fair, it's deliberately graphic and lean), I was sold on the outcomes, and one of them is 'cells' composed of a small group of diverse individuals accomplishing a business outcome. He makes clear that this is not departments in a hierarchy, but flat communication between cross-functional teams.

And, finally, Stan McChrystal has a book out called Team of Teams,  that builds upon the concepts he presented as a keynote I mindmapped previously. This emerged from  how the military had to learn to cope with rapid changes in tactics.  Here again, the same concept of small groups working with a clear mission and freedom to pursue emerges.

This also aligns well with the results implied by Dan Pink's Drive, where he suggests that the three critical elements for performance are to provide people with important goals, the freedom to pursue them, and support to succeed. Small teams fit well with what's known about getting the best ideas and solutions out of people, such as in brainstorming.

These are nuances on top of Jon Husband’s Wirearchy, where we have some proposed structure around the connections.  It’s clear that to become adaptive, we need to strengthen connections and decrease structure (interestingly, this also reflects the organizational equivalents of nature’s extremophiles).  It’s about trust and purpose and collaboration and more.  And, of course, to create a culture where learning is truly welcomed.

Interesting that out of responding to societal changes, organizational work, and military needs, we see a repeated pattern.  As such, I think it’s worth taking notice.   And there are clear L&D implications, I reckon. What say you?

#itashare

Buy this…for your boss

14 October 2015 by Clark 4 Comments

So I’ve been pushing an L&D  Revolution, and for good reasons.  I truly believe that L&D is on a path to extinction because: “it isn’t doing near what it could and should, and what it is doing, it is doing badly, otherwise it’s fine” (as my mantra would have it).  So many bad practices –  info-dump and knowledge-test classes, no alternative to courses, lack of measuring impact – mean that  L&D  is out of touch with the information age.  And  what with everyone being able to access the web, content creation tools, and social media environments, wherever and whenever they are, people can survive and thrive without  what L&D does, and are doing so.

What I've argued is that we need to align with how we really think, work, and learn, and bring that to the organization. What L&D could be doing – providing a rich performance ecosystem that not only empowers optimal execution, but fosters the necessary continual innovation – is a truly deep contribution to the success of the organization.

I feel so strongly that I wrote a book about it.    If you’ve read it, you know it documents the problems, provides framing concepts, is illustrated with examples, and promotes a roadmap forward (if you’ve read and liked it, I’d love  an Amazon review!).  And while it’s both selling reasonably well (as far as I can tell, the information from my publisher is impenetrable ;) and leading to speaking opportunities, I fear it’s not getting  to the right people.  Frankly,  most of my speaking and writing has been at the practitioner and manager level, and this is really for the director,  and up!  All the way to the C-suite, potentially. And while I make an effort to get this idea into their vision, there’s a lot of competition, because  everyone wants the C-suite’s attention.

The point I want to make is that the real audience for this book is your boss (unless you’re the CEO, of course ;).  And I’m not saying this to sell books (I’m unlikely to make more than enough to buy a couple of cups of coffee off the proceeds, given book contracts), but because I think the message is so important!

So, let me implore you to consider somehow getting the revolution in front of your boss, or your grandboss, and up.  It doesn’t have to be the book, but the concept really needs to be understood if the organization is going to remain competitive.  All evidence points to the fact that organizations have to become more agile, and that’s a role L&D is in a prime position to facilitate.  If, however (and that’s a big if), they get the bigger picture.  And that’s the message I’m trying to spread in all the ways I can see.  I welcome your thoughts, and your assistance even more.

Supporting our Brains

13 October 2015 by Clark 5 Comments

One of the ways I've been thinking about the role mobile can play in design is thinking about how our brains work, and don't. It came out of both mobile and the cognitive science for learning workshop I gave at the recent DevLearn. This applies more broadly to performance support in general, so I thought I'd share where my thinking is going.

To begin with, our cognitive architecture is demonstrably awesome; just look at your surroundings and recognize that your clothing, housing, technology, and more are the product of human ingenuity. We have formidable capabilities to predict, plan, and work together to accomplish significant goals. On the flip side, there's no one all-singing, all-dancing architecture out there (yet), and every such approach has weak points. Technology, for instance, is bad at pattern-matching and meaning-making, two things we're really pretty good at. Conversely, we have some flaws too. So what I've done here is to outline the flaws, and how we've created tools to get around those limitations. And to me, these are principles for design:

So, for instance, our senses capture incoming signals in a sensory store, which has the interesting property of almost unlimited capacity, but for only a very short time. There is no way all of it can get into our working memory, so what we attend to is what we have access to, and we can't recall everything we perceive accurately. However, technology (camera, microphone, sensors) can record it all perfectly. So making capture capabilities available is a powerful support.

Similarly, our attention is limited, so if we're focused in one place, we may forget or miss something else. However, we can program reminders or notifications that help us recall important events that we don't want to miss, or draw our attention where needed.

The limits on working memory (you may have heard of the famous 7 ±2, which really is <5) mean we can’t hold too much in our brains at once, such as interim results of complex calculations.  However, we can have calculators that can do such processing for us. We also have limited ability to carry information around for the same reasons, but we can create external representations (such as notes or  scribbles) that can hold those thoughts for us.  Spreadsheets, outlines, and diagramming tools allow us to take our interim thoughts and record them for further processing.

We also have trouble remembering things accurately. Our long term memory tends to remember meaning, not particular details. However, technology can remember arbitrary and abstract information completely. What we need are ways to look up that information, or search for it. Portals and lookup tables trump trying to put that information into our heads.

We also have a tendency to skip steps. We have some randomness in our architecture (a benefit: if we sometimes do it differently, and occasionally that’s better, we have a learning opportunity), but this means that we don’t execute perfectly.  However, we can use process supports like checklists.  Atul Gawande wrote a fabulous book on the topic that I can recommend.

Other phenomena include that previous experience can bias us in particular directions, but we can put in place supports to provide lateral prompts. We can also prematurely evaluate a solution rather than checking to verify it's the best; data can be used to help us be aware. And we can trust our intuition too much, and we can wear down, so we don't always make the best decisions. Templates, for example, are a tool that can help us focus on the important elements.

This is just the result of several iterations, and I think more is needed (e.g. about data to prevent premature convergence), but to me it’s an interesting alternate approach to consider where and how we might support people, particularly in situations that are new and as yet untested.  So what do you think?

Mobile Time

6 October 2015 by Clark 1 Comment

At the recent DevLearn conference, David Kelly spoke about his experiences with the Apple Watch.  Because I don’t have one yet, I was interested in his reflections.  There were a number of things, but what came through for me (and other reviews I’ve read) is that the time scale is a factor.

Now, first, I don't have one because, as with technology in general, I don't typically acquire anything in particular until I know how it's going to make me more effective. I may have told this story before, but for instance I wasn't interested in acquiring an iPad when they were first announced ("I'm not a content consumer"). By the time they were available, however, I'd heard enough about how it would make me more productive (as a content creator) that I got one the first day it was available.

So too with the watch. I don’t get a lot of notifications, so that isn’t a real benefit.   The ability to be navigated subtly around towns sounds nice, and to check on certain things.  Overall, however, I haven’t really found the tipping-point use-case.  However, one thing he said triggered a thought.

He was talking about how it had reduced the number of times he accessed his phone, and I'd heard that from others, but here it struck a different chord. It made me realize it's about time frames. I'm trying to make useful conceptual distinctions between devices to help designers figure out the best match of capability to need. So I came up with what seemed an interesting way to look at it.

Similar to the way I'd seen Palm talk about the difference between laptops and mobile, I was thinking about the time you spend using your devices. The watch (a wearable) is accessed quickly for small bits of information. A pocketable (e.g. a phone) is used for a number of seconds up to a few minutes. And a tablet tends to get accessed for longer uses (a laptop doesn't count). Folks may well have all 3, but they use them for different things.

Sure, there are variations, (you  can watch a movie on a phone, for instance; phone calls could be considerably longer), but by and large I suspect that the time of access you need will be a determining factor (it’s also tied to both battery life and screen size). Another way to look at it would be the amount of information you need to make a decision about what to do, e.g.  for cognitive work.

Not sure this is useful, but it was a reflection and I  do like to share those. I welcome your feedback!

Revolution Roadmap: Assess

23 September 2015 by Clark 3 Comments

Last week, I wrote about a process to follow in moving forward on the L&D  Revolution. The first step is  Assess,  and I’ve been thinking about what that means.   So here, let me lay out some preliminary thoughts.

The first level is the broad categories. As I'm talking about aligning with how we think, work, and learn, those are the three top areas where I feel we fail to recognize what's known about cognition, individually and together. As I mentioned yesterday, I'm looking at how we use technology to facilitate productivity in ways specifically focused on helping people learn. But let me be clear: here I'm talking about the big picture of learning – problem-solving, design, research, innovation, etc. – as they all fall under the category of things we don't know the answer to when we begin.

I started with how we think. Too often we don't put information in the world when we can, even though we know that not all our thinking is in our head. So we can ask:

  • Are you using performance consulting?
  • Are you taking responsibility for resource development?
  • Are you ensuring the information architecture for resources is user-focused?

The next area is working, and here the revelation is that the best outcomes come from people working together.  Creative friction, in consonance with how we work together best, is where the strongest solutions and newest ideas will come from. So you can look at:

  • Are people communicating?
  • Are people collaborating?
  • Do you have a learning culture in place?

Finally, with learning, the area most familiar to L&D, we need to look at whether we're applying what's known about making learning work.  We should start with Serious eLearning, but we can go farther.  Things to look at include:

  • Are you practicing deeper learning design?
  • Are you designing engagement into learning?
  • Are you developing meta-learning?

In addition to each of these areas, there are cross-category issues.  Things to look at for each include:

  • Do you have infrastructure?
  • What are you measuring?

All of these areas have nuances underneath, but at the top level these strike me as the core categories of questions.  This works down to a finer grain than I looked at in the book (cf. Figure 8.1), though that was a good start at evaluating where one is.

I'm convinced that the first step toward change is to understand where you are (before the next step, Learn, about where you could be).  I've yet to see many organizations that are in full swing here, and I have persistently made the case that the status quo isn't sufficient.  So, are you ready to take the first step and assess where you are?
