Learnlets

Clark Quinn’s Learnings about Learning

Biz tech

22 September 2015 by Clark 2 Comments

One of my arguments for the L&D revolution is the role that L&D  could be playing.  I believe that if L&D were truly enabling optimal execution as well as facilitating continual innovation (read: learning), then they’d be as critical to the organization as IT. And that made me think about how this role would differ.

To be sure, IT is critical. In today's business, we track our business, do our modeling, run operations, and more with IT. There is plenty of vertical-specific software, from product design to transaction tracking, and of course more general business software such as document generation, financials, etc. So how can L&D be as ubiquitous as other software? Several ways.

First, formal learning software is really enterprise-wide. Whether it's simulations/scenarios/serious games, spaced learning delivered via mobile, or user-generated content (note: I'm deliberately avoiding the LMS and courses ;), these things should play a role in preparing the audience to optimally execute, and should be accessed by a large proportion of the audience. And that's not including our tools to develop same.

Similarly, our performance support solutions – portals housing job aids and context-sensitive support – should be broadly distributed. Yes, IT may own the portals, but in most cases they can't be trusted to deliver a user- and usage-centered solution. L&D should be involved in ensuring that the solutions both articulate with and reflect the formal learning, and are organized by user need, not business silo.

And of course the social network software – profiles and locators as well as communication and collaboration tools – should be under the purview of L&D. Again, IT may own or maintain them, but facilitating their use, understanding the different roles, and ensuring they're being used efficiently is a role for L&D.

My point here is that there is an enterprise-wide category of software, supporting learning in the big sense (including problem-solving, research, design, innovation), that should be under the oversight of L&D.  And this is the way in which L&D becomes more critical to the enterprise.  That it’s not just about taking people away from work and doing things to them before sending them back, but facilitating productive engagement and interaction throughout the workflow.  At least at the places where they’re stepping outside of the known solutions, and that is increasingly going to be the case.

Agile?

17 September 2015 by Clark 6 Comments

Last Friday’s #GuildChat was on Agile Development. The topic is interesting to me because, as with Design Thinking, it seems like well-known practices under a new branding. So as I did then, I’ll lay out what I see and hope others will enlighten me.

As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka Human Computer Interaction or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating the best practices from HCI to the ed tech community.  What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty  familiar with the principles and a fan. That is, really understand the context, design and test frequently, working in teams with your customers.

Fast forward a couple of decades, and the Agile Manifesto puts a stake in the ground for software engineering. And we see a focus on releasable code, but again with principles of iteration and testing, team work, and tight customer involvement.  Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto.

That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s  SAM  or Ger Driesen’s  Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design.  And this is a good thing (only decades late ;).  Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.

And we certainly need change. The existing approaches we too often see involve a designer being given some documents, access to a SME (if lucky), and told to create a course on X. Sure, there are tools and templates, but they are focused on making particular interactions easier, not on ensuring better learning design. And the person works alone, doing the design and development in one pass. There are likely to be review checkpoints, but there's little testing. There are variations on this, including perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it's largely an independent, one-way flow, and this isn't good.

The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don't work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don't take into account the fact that, Heisenberg-like, what we implement will change the system. Iterative development and testing allow the specs to change after initial experience. Several issues arise with this, however.

For one, there’s a question about the right size and scope of a deliverable. Learning experiences, while typically overwritten, do have some structure that keeps them from having intermediately useful results. I was curious about what made sense; to me it seemed that you could develop your final practice first as a deliverable, and then fill in with the required earlier practice and content resources. This was similar to what was offered up during the chat in response to my question.

The other is scoping and budgeting the process. I often ask, when talking about game design, how you know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer is when you've hit your metrics: the ones you should set before you begin, that determine the parameters of a solution (and they can be consciously reconsidered as part of the process). The typical answer, particularly for those concerned with controlling costs, is something like a heuristic choice of 3 iterations. Drawing on some other work in software process, I'd recommend creating estimates, but then reviewing them afterward. In the software case, people got much better at estimates, and that could be a valuable extension. But it shouldn't be any more difficult to estimate, certainly with some experience, than existing methods.
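To make the contrast concrete, the metrics-first stopping rule can be sketched in code: iterate until a pre-set target is hit, with the budget cap as a backstop rather than the stopping criterion. This is a minimal illustration, not any particular methodology's algorithm; all names here are hypothetical.

```python
# Sketch: stop iterating when pre-set metrics are hit, not when money runs out.
# build_prototype and evaluate are stand-ins for your design/testing steps.

def iterate_until_metrics(build_prototype, evaluate, target_score,
                          max_iterations=10):
    """Refine a design until it hits a metric agreed *before* development.

    build_prototype(feedback) -> prototype: produces the next revision.
    evaluate(prototype) -> (score, feedback): e.g. testing with real learners.
    target_score: the success criterion set up front (reconsiderable en route).
    max_iterations: the cost-control backstop, reviewed afterward to
                    improve future estimates.
    """
    feedback = None
    prototype, history = None, []
    for i in range(1, max_iterations + 1):
        prototype = build_prototype(feedback)
        score, feedback = evaluate(prototype)
        history.append((i, score))
        if score >= target_score:
            return prototype, history  # stopped because the metric was hit
    return prototype, history  # stopped on budget; revisit the estimate
```

The point of the `history` list is the post-project review: comparing the iterations you estimated against the iterations you needed is how, as in the software case, the estimates get better over time.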

Ok, so I may be a bit jaded about new brandings on what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing.  I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should  be glad you did.

ALIGN!

15 September 2015 by Clark 7 Comments

I’m recognizing that there’s an opportunity to provide more support for implementing the Revolution. So I’ve been thinking through what sort of process might be a way to go about making progress. Given that the core focus is on aligning with how we think, work, and learn (elements we’re largely missing), I thought I’d see whether that could provide a framework. Here’s my first stab, for your consideration:

Assess: here we determine our situation. I’m working on an evaluation instrument that covers the areas and serves as a guide to any gaps between current status and possible futures, but the key element is to ascertain where we are.

Learn: this step is about reviewing the conceptual frameworks available, e.g. our understandings of how we think, work and learn. The goal is to identify possible directions to move in detail and to prioritize them. The ultimate outcome is our next step to take, though we may well have a sequence queued up.

Initiate: after choosing a step, here’s where we launch it. This may not be a major initiative. The principle of ‘trojan mice’ suggests small focused steps, and there are reasons to think small steps make sense. We’ll need to follow the elements of successful change, with planning, communicating, supporting, rewarding, etc.

Guide: then we need to assess how we’re doing and look for interventions needed. This involves knowing what the change should accomplish, evaluating  to see if it’s occurring, and implementing refinements as we go.  We shouldn’t assume it will go well, but instead check and support.

Nurture: once we’ve achieved a stable state, we want to nurture it on an ongoing basis. This may be documenting and celebrating the outcome, replicating  elsewhere, ensuring persistence and continuity, and returning to see where we are now and where we should go next.

Obviously, I’m pushing the ALIGN acronym (as one does), as it helps reinforce the message.   Now to put in place tools to support each step.  Feedback solicited!

Culture Before Strategy

9 September 2015 by Clark Leave a Comment

In an insightful article, Ken Majer (full disclosure, a boss of mine many years ago) has written about the need to have the right culture before executing strategy.  And this strikes me as a valuable contribution to thinking about effective change in the transformation of L&D in the Revolution.

I have argued that you can get some benefits from the Revolution without having an optimized culture, but you’re not going to tap into the full potential. Revising formal learning to be truly effective by aligning to how we learn, adding in performance support in ways that augment our cognitive limitations, etc, are all going to offer useful outcomes. I think the optimal execution stuff will benefit, but  the ability to truly tap into the network for the continual innovation requires making it safe and meaningful to share. If it’s not safe to Show Your Work, you can’t capitalize on the benefits.

What Ken is talking about here is ensuring you have values and culture in alignment with the vision and mission.  And I’ll go further and say that in the long term, those values have to be about valuing people and the culture has to be about working and learning together effectively.  I think that’s the ultimate goal when you really want to succeed:  we know that people perform best when given meaningful work and are empowered to pursue it.

It’s not easy, for sure.  You need to get explicit about your values and how those manifest in how you work. You’ll likely find that some of the implicit values are a barrier, and they’ll require conscious work to address. The change in approach on the part of management and executives and the organizational restructuring that can accompany this new way of working isn’t going to happen overnight, and change is hard.  But  it is increasingly, and will  be, a business necessity.

So too for the move to a new L&D. You can start working in these ways within your organization, and grow it.  And you should. It’s part of the path, the roadmap, to the revolution.  I’m working on more bits of it, trying to pull it together more concretely, but it’s clear to me that one thread (and as already indicated in the diagrams that accompany the book) is indeed a path to a more enabling culture. In the long term, it will be uplifting, and it’s worth getting started on now.

Accreditation and Compliance Craziness

8 September 2015 by Clark 3 Comments

A continued bane of my existence is the ongoing requirements that are put in place for a variety of things.  Two in particular are related and worth noting: accreditation and compliance.  The way they’re typically construed is barking mad, and we can (and need to) do better.

To start with accreditation. It sounds like a good thing: to make sure that someone issuing some sort of certification has in place the proper procedures.  And, done rightly, it would be. However, what we currently see is that, basically, the body says you have to take what the Subject Matter Expert (SME) says as the gospel. And this is problematic.

The root of the problem is that SMEs don’t have access to around 70% of what they do, as research at the University of Southern California’s Cognitive Technology group has documented. However, of course, they have access to all they ‘know’.  So it’s easy for them to say what learners should know, but not what learners  actually should be able to do.  And some experts are better than others at articulating this, but the process is opaque to this nuance.

So unless the certification process is willing to allow the issuing institution the flexibility to use a process to drill down into the actual ‘do’, you’re going to get knowledge-focused courses that don’t actually achieve important outcomes. You could do things like incorporating those who depend on the practitioners, and/or using a replicable and grounded process with SMEs that helps them work out what the core objectives need to be: meaningful ones, à la competencies. And a shoutout to Western Governors University for somehow being accredited using competencies!

Compliance is, arguably, worse.  Somehow, the amount of  time you spend is the important determining factor. Not what you can do at the end, but instead that you’ve done  something  for an hour.  The notion that amount of time spent relates to ability at this level of granularity is outright maniacal.  Time would matter, differently for different folks, but you have to be doing the right thing, and there’s no stricture for that.   Instead, if you’ve been subjected to an hour of information, that somehow is going to change your behavior. As if.

Again, competencies would make sense.  Determine what you need them to be able to do, and then assess that. If it takes them 30 minutes, that’s OK. If it takes them 5 hours, well, it’s necessary to be compliant.

I’d like to be wrong, but I’ve seen personal instances of both of these, working with clients. I’d really like to find a point of leverage to address this. How can we start having processes that identify the necessary skills, and then use those to determine ability, not time or arbitrary authority? Where can we start to make this necessary change?

Community of improvement?

1 September 2015 by Clark 1 Comment

In a conversation I had recently, specifically about a community focused on research, I used the term ‘community of improvement’, and was asked how that was different than a community of practice. It caused me to think through what the differences might be.  (BTW, the idea was sparked by conversations with Lucian Tarnowski from BraveNew.)

First, let me say that a community of practice  could be, and should be, a community of improvement. One of the principles of practice is reflection and improvement.  But that’s not necessarily the case.  A community of practice could just be a place where people answer each other’s questions, collaborate on tasks, and help one another with issues  not specifically aligned with the community.  But there should be more.

What I suggested in the conversation was that a  community  should  also be about documenting practice,  applying that practice through action or design research, and reflecting on the outcomes and the implications for practice.  The community should be looking to other fields for inspiration, and attempting experiments. It’s the community equivalent of Schön’s reflective practitioner.  And it’s  more than just cooperation  or collaboration, but actively engaging and working to improve.

Basically, this requires collaboration tools, not just communication tools. It requires: places to share thoughts; ways to find partners for the documentation, experimentation, and reflection; and support to track and share the resulting changes to community practices.

Yes, obviously a real community of practice should be doing this, but too often I see community tools without the collaboration tools. So I think it’s worth being explicit about what we would hope will accompany the outcomes.  So, where do we do this, and how?

#itashare

Aligning

25 August 2015 by Clark 2 Comments

I’m realizing that a major theme of my work and the revolution is that what we do in organizations, and what we do as L&D practitioners, is not aligned with how we think, work, and learn.  And to that extent, we’re doomed to failure. We can, and need to, do better.

Let’s start with thinking. The major mismatch here is the assumption that our thinking is done rationally and in our heads. Results in cognitive science show, instead, that much of our thinking is irrational and distributed across the world. We use external representations and tools, and unless we’re experts, we make decisions and use our brains to justify them rather than actually doing the hard work.

What does this mean for organizations and L&D?  It means we should be looking to augment how we think, with tools and processes like performance support, helping us find information with powerful search.  We want to have open book learning, since we’ll use the book in the real world, and we want to avoid putting it ‘in the head’ as much as possible. Particularly rote information. We should expect errors, and provide support with checklists, not naively expect that people can perform like robots.

This carries over to how we work.  The old view is that we work alone, performing our task, and being managed from above with one person thinking for a number of folks.  What we now know, however, is that this view isn’t optimal. The output is better when we get multiple complementary minds working together.  Adaptation and innovation work best when we work together.

So we don’t need isolation to do our work, we need cooperation  and collaboration.  We need ways to work together. We need to give people meaningful tasks and give them space to execute, with appropriate support. We need to create environments where it’s safe to share, to show your work, to work out loud.

And our models of learning are broken. We know that the typical event, comprised of an information dump and a knowledge test, doesn’t work. Rote procedures are no longer sufficient for the increasing ambiguity and unique situations our learners are seeing. And the notion that “practice ’til they get it right” will lead to any meaningful change in ability is fundamentally flawed.

To learn, we need models to guide our behavior and help us adapt.   We need to identify and address misconceptions. We need learners to engage concretely and be scaffolded in reflection.     And we need  much practice.  Our learning experiences need to look much more like scenarios and serious games, not like text and next.

We’re in an information age, and industrial models just won’t cut it. I’m finding that we’re hampered by a fundamental lack of awareness of our brains, and this is manifesting in too many unfortunate and ineffective practices. We need to get better. We know better paths, and we need to tread them. Let’s start acting like professionals and develop the expertise we need to do the job we must do.

#itashare

Where in the world is…

18 August 2015 by Clark Leave a Comment

It’s time for another game of Where’s Clark?  As usual, I’ll be somewhat peripatetic this fall, but more broadly scoped than usual:

  • First I’ll be hitting Shenzhen, China at the end of August  to talk advanced mlearning  for a private event.
  • Then I’ll be hitting the always excellent  DevLearn  in Las Vegas at the end of September to run a workshop on learning science for design (you  should want to attend!) and give a session on content engineering.
  • At the end of October I’m down under  at the Learning@Work event in Sydney to talk the Revolution.
  • At the beginning of November I’ll be at LearnTech Asia in Singapore, with an impressive lineup of fellow speakers to again sing the praises of reforming L&D.
  • That might seem like enough, but I’ll also be at Online Educa in Berlin at the beginning of December running an mlearning for academia workshop and seeing my ITA colleagues.

Yes, it’s quite the whirl, but with this itinerary I should be somewhere near you almost anywhere you are in the world. (Or engage me to show up at your locale!) I hope to see  you at one event or another  before the year is out.

Designing Learning Like Professionals

12 August 2015 by Clark 4 Comments

I’m increasingly realizing that the ways we design and develop content are part of the reason why we’re not getting the respect we deserve.  Our brains are arguably the most complex things in the known universe, yet we don’t treat our discipline as the science it is.  We need to start combining experience design with learning engineering to really start delivering solutions.

To truly design learning, we need to understand learning science.  And this does  not mean paying attention to so-called ‘brain science’. There is legitimate brain science (c.f. Medina, Willingham), and then there’s a lot of smoke.

For instance, there’re sound cognitive reasons why information dump and knowledge test won’t lead to learning.  Information that’s not applied doesn’t stick, and application that’s not sufficient doesn’t stick. And it won’t transfer well if you don’t have appropriate contexts across examples and practice.  The list goes on.

What it takes is understanding our brains: the different components, the processes, how learning proceeds, and what interferes. And we need to look at the right levels; lots of neuroscience is not relevant at the higher level where our thinking happens. And much about that is still under debate (just google ‘consciousness’ :).

What we do have are robust theories about learning that pretty comprehensively integrate the empirical data. More importantly, we have lots of ‘take home’ lessons about what does, and doesn’t, work. But just following a template isn’t sufficient. There are gaps where we have to use our best inferences, based upon models, to fill in.

The point I’m trying to make is that we have to stop treating designing learning as something anyone can do.  The notion that we can have tools that make it so anyone can design learning has to be squelched. We need to go back to taking pride in our work, and designing learning that matches how our brains work. Otherwise, we are guilty of malpractice. So please,  please, start designing in coherence with what we know about how people learn.

If you’re interested in learning more, I’ll be running a learning science for design workshop at DevLearn, and would love to see you there.

Content engineering

11 August 2015 by Clark 2 Comments

We’ve heard about learning engineering, and while the focus is on experience design, the pragmatics include designing content to create the context, resources, and motivation for the activity. And it’s time we step beyond just hardwiring this content together, and start treating it as professionals do.

Look at business websites these days. You can customize the content you’re searching for with filters.  The content reacts to the device you’re on and displays appropriately.  There can even be content that is specific to your particular trace of action through the site and previous visits.  Just look at Amazon or Netflix recommendations!

This doesn’t happen with hardwired sites anymore. If you look at the conferences around content, you’ll find that they’re talking industrial-strength solutions. They use content management systems, carefully articulated with tight definitions and associated tags, and rules that pull together those content elements by definition into the resulting site. This is content engineering, and it’s a direction we need to go.

What’s involved is tighter templates around content roles, metadata describing the content, and management of the content. You write into the system, describe it, and pull it out by description, not by hard link. This allows flexibility, and rules that can pull differentially by different contexts: different people, different roles, different needs, and different devices. We also separate out what it says from how it looks, using tags to support rendering appropriately on different devices rather than hard-coding the appearance as well as the content and the assembly.
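A toy sketch of what "pull by description, not by hard link" might look like: content items carry metadata, and a rule selects by attributes rather than by hard-coded references. The field names and values here are illustrative assumptions, not any real CMS schema.

```python
# Toy content base: each item is described by metadata, not wired into a page.
# Fields (role, topic, audience, device) are hypothetical examples.
CONTENT = [
    {"id": "ex-01", "role": "example", "topic": "negotiation",
     "audience": "sales", "device": "mobile",
     "body": "Short worked example for a small screen..."},
    {"id": "ex-02", "role": "example", "topic": "negotiation",
     "audience": "sales", "device": "desktop",
     "body": "Richer worked example with full context..."},
    {"id": "job-aid-07", "role": "job_aid", "topic": "negotiation",
     "audience": "support", "device": "mobile",
     "body": "Checklist for the call..."},
]

def pull(**criteria):
    """Select content whose metadata matches every given criterion."""
    return [item for item in CONTENT
            if all(item.get(key) == value for key, value in criteria.items())]

# The same content base serves different contexts via rules, not hard links:
mobile_sales = pull(topic="negotiation", audience="sales", device="mobile")
```

The payoff described above falls out of this separation: supporting a new device or role means writing a new rule (a new `pull(...)` call), not hand-rewiring every page that referenced the content.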

This is additional work, but the reasons are several.  First, being tighter around content definitions provides a greater opportunity to be scientific about the role the content plays. We’re too lax in our content, so that beyond a good objective, we don’t specify what makes a good example, etc.   Second, by using a system to maintain that content, we can get more rigorous in content management.  I regularly ask audiences whether they have outdated legacy content hanging around, and pretty much everyone agrees. This isn’t effective content governance, and content should have regular cycles of review and expiry dates.

By this tighter process, we not only provide better content design, delivery, and management, but we set the stage for the future. Personalization, customization, and contextualization are hampered when you have to hand-configure every option you will support. It’s much easier to write a new set of rules, and then your content can serve new purposes, new business models, and more.

If you want to know more about this, I hope to see you at my session on content at DevLearn!
