Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

5 November 2014

#DevLearn 14 Reflections

Clark @ 9:57 am

This past week I was at the always-great DevLearn conference, the biggest and arguably the best yet.  There were some hiccups in my attendance, as several blocks of time were taken up with various commitments, both work and personal, so for instance I didn't really get a chance to peruse the expo at all.  Yet I attended keynotes and sessions, as well as presenting, and hobnobbed with folks both familiar and new.

The keynotes were arguably even better than before, and a high bar had already been set.

Neil deGrasse Tyson was eloquent and passionate about the need for science and the mismatch between school and life.  I had a quibble with his statement that doing math teaches problem-solving: it takes the right type of problems (Common Core is a step in the right direction) and it takes explicit scaffolding.  Still, his message was powerful and well communicated. He also made an unexpected connection between Women's Liberation and the decline of school quality that I hadn't considered.

Beau Lotto also spoke, linking how our past experience alters our perception to necessary changes in learning.  While I was familiar with his starting point of perception (a fundamental part of cognitive science, my doctoral field), he took it in a very interesting and useful direction in an engaging and inspiring way.  His take-home message, teach not how to see but how to look, was succinct and apt.

Finally, Belinda Parmar took on the challenge of women in technology, and documented how small changes can make a big difference. Given the madness of #gamergate, the discussion was a useful reminder of the inequity that persists in many fields and for many people.  She left lots of time for a meaningful discussion about the issues, a nice touch.

Owing to those personal and speaking commitments, I didn't get to see many sessions. I had the usual mix of good ones and a not-so-good one (though I admit my bar is kind of high).  I like that the Guild balances known speakers and topics with taking some chances on both.  I also note that most of the known speakers are folks I respect who continue to think ahead and bring new perspectives, even when presenting in a track representing their work.  As a consequence, the overall quality is always very high.

And the associated events continue to improve.  The DemoFest was almost too big this year: so many examples that it's hard to even start looking at them, because you want to be fair and see them all, but it's just too monumental. Of course, the Guild had a guide that grouped them, so you could drill down into the ones you wanted to see.  The expo reception was a success as well, and the various snack breaks suited the opportunity to mingle.  I kept missing the ice cream, but perhaps that's for the best.

I was pleased to have the biggest turnout yet for a workshop, and I take the interest in elearning strategy as an indicator that the revolution is taking hold.  The attendees were faced with the breadth of things to consider across advanced ID, performance support, eCommunity, backend integration, and decoupled delivery, and then were led through the process of identifying the elements and steps in a strategy.  The informal feedback was that, while daunted by the scope, they were excited by the potential and recognized the need to begin.  The fact that the Guild is holding the Learning Ecosystem conference, and has released a new and quite good white paper by Marc Rosenberg and Steve Foreman, is further evidence that awareness is growing.  Marc and Steve carve up the world a little differently than I do, but we say similar things about what's important.

I am also pleased that Mobile interest continues to grow, as evidenced by the large audience at our mobile panel, where I was joined by other mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell.  They provide nicely differing viewpoints, with Sarah representing the irreverent designer, Robert the pragmatic systems perspective, and Chad the advanced technology view, to complement my more conceptual approach.  We largely agree, but represent different ways of communicating and thinking about the topic. (Sarah and I will be joined by Nick Floro for ATD’s mLearnNow event in New Orleans next week).

I also talked about trying to change the pedagogy of elearning at the Wadhwani Foundation: the approach we're taking and the challenges we face.  The goal I'm involved in is job skilling, and consequently there's a real need and a real opportunity.  What I'm fighting for is meaningful practice as the way to achieve real outcomes.  We have some positive steps and some missteps, but I think we have the chance to have a real impact. It's a work in progress, and fingers crossed.

So what did I learn?  The good news is that the audience is getting smarter, wanting more depth in their approaches and breadth in what they address. The bad news is that the view of 'information dump & knowledge test = learning' is still all too prevalent. We're making progress, but too slowly (ok, so perhaps patience isn't my strong suit ;).  If you haven't, please do check out the Serious eLearning Manifesto (created with my colleagues Michael Allen, Julie Dirksen, and Will Thalheimer) to get some guidance about what I'm talking about.  And now there's an app for that!

If you want to get your mind around the forefront of learning technology, at least in the organizational space, DevLearn is the place to be.


28 October 2014

Cognitive prostheses

Clark @ 8:05 am

While our cognitive architecture has incredible capabilities (how else could we come up with advances such as Mystery Science Theater 3000?), it also has limitations. The same adaptive capabilities that let us cope with information overload in both familiar and new ways also lead to some systematic flaws. And that led me to think about the ways in which we work around these limitations, as they have implications for designing solutions for our organizations.

The first limit is at the sensory level. Our mind actually processes pretty much all the visual and auditory sensory data that arrives, but it disappears pretty quickly (within milliseconds) except for what we attend to. Basically, your brain fills in the rest (which leaves open the opportunity for mistakes). What do we do? We've created tools that let us capture things accurately: cameras and audio recorders. These allow us to capture the context exactly, not as our memory reconstructs it.

A second limitation is our 'working' memory. We can't hold too much in mind at one time. We 'chunk' information together as we learn it, and can then hold more total information at one time. Also, the format of working memory is largely 'verbal'. Consequently, using tools like diagrams, outlines, or mindmaps adds structure to our knowledge and supports our ability to work on it.

Another limitation of our working memory is that it doesn't support complex calculations with many intermediate steps. Consequently we need ways to deal with this. External representations (as above), such as recording intermediate steps, work, but we can also build tools that offload that processing, such as calculators. Wizards, or interactive dialog tools, are another form of calculator.
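
To make that concrete, here's a minimal sketch (in TypeScript, with invented steps and a deliberately simplified calculation, not a real amortization formula) of a wizard as a working-memory prosthesis: the tool, not the user, carries the intermediate results.

```typescript
// A wizard as a working-memory prosthesis: each step stores its intermediate
// result so the user never has to hold it in mind. The steps and the math
// are invented and deliberately simplified for illustration.
type Step = {
  prompt: string;
  compute: (prev: number, input: number) => number;
};

const steps: Step[] = [
  { prompt: "Loan amount?", compute: (_prev, input) => input },
  { prompt: "Total interest (%)?", compute: (prev, input) => prev * (1 + input / 100) },
  { prompt: "Years to repay?", compute: (prev, input) => prev / (input * 12) },
];

// Run the wizard over canned answers standing in for real user input;
// the tool, not the user, carries the running state between steps.
let running = 0;
const answers = [250000, 5, 30];
steps.forEach((step, i) => {
  running = step.compute(running, answers[i]);
  console.log(`${step.prompt} -> running value: ${running.toFixed(2)}`);
});
```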

Processing information in short-term memory can lead to it being retained in long-term memory. Here the storage is almost unlimited in time and scope, but it's hard to get information in there, and it isn't remembered exactly, but by meaning. Consequently, models are a better learning strategy than rote learning. And external sources, like the ability to look up or search for information, are far better than trying to get it all in the head.

Similarly, external support for the times we do have to do things by rote is a good idea. Support for process is useful, and it's the reason checklists have been a ubiquitous and effective way to get more accurate execution.
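
In the same spirit, a checklist can be seen as a tiny data structure that holds the process state so the performer doesn't have to. A minimal sketch, with illustrative steps:

```typescript
// A checklist as a cognitive prosthesis: the structure, not the performer's
// memory, tracks what has and hasn't been done. Names are illustrative.
interface ChecklistStep {
  description: string;
  done: boolean;
}

class Checklist {
  constructor(private steps: ChecklistStep[]) {}

  complete(index: number): void {
    this.steps[index].done = true;
  }

  // The next unfinished step, so the performer never has to hold
  // "where was I?" in working memory.
  nextStep(): ChecklistStep | undefined {
    return this.steps.find((step) => !step.done);
  }

  isFinished(): boolean {
    return this.steps.every((step) => step.done);
  }
}

const preflight = new Checklist([
  { description: "Verify backup completed", done: false },
  { description: "Notify stakeholders", done: false },
  { description: "Deploy to production", done: false },
]);
preflight.complete(0);
console.log(preflight.nextStep()?.description); // "Notify stakeholders"
```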

In execution, we have a few flaws too. We're heavily biased to solve new problems in the ways we've solved previous problems (even if that's not the best approach). We're also likely to use tools in familiar ways and miss new ways to use them to solve problems. There are ways to prompt lateral thinking at appropriate times, and we can both make access to such support available and even trigger it when we have contextual clues.

We're also biased to converge prematurely on an answer (intuition) rather than seek to challenge our findings. Access to data, and support for capturing and invoking alternative ways of thinking, make such mistakes less likely.

Overall, our more formal logical thinking fatigues quickly. Scaffolding like the above decreases the likelihood of a mistake and increases the likelihood of an optimal outcome.

When you look at performance gaps, you should look to such approaches first, and look to putting information in the head last. This more closely aligns our support efforts with how our brains really think, work, and learn. This isn’t a complete list, I’m sure, but it’s a useful beginning.

24 October 2014

#DevLearn Schedule

Clark @ 8:30 am

As usual, I will be at DevLearn (in Las Vegas) this next week, and welcome meeting up with you there.  There is a lot going on.  Here’re the things I’m involved in:

  • On Tuesday, I’m running an all day workshop on eLearning Strategy. (Hint: it’s really a Revolutionize L&D workshop ;).  I’m pleasantly surprised at how many folks will be there!
  • On Wednesday at 1:15 (right after lunch), I’ll be speaking on the design approach I’m leading at the Wadhwani Foundation, where we’re trying to integrate learning science with pragmatic execution.  It’s at least partly a Serious eLearning Manifesto session.
  • On Wednesday at 2:45, I’ll be part of a panel on mlearning with my fellow mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell, chaired by conference program director David Kelly.

Of course, there’s much more. A few things I’m looking forward to:

  • The keynotes:
    • Neil deGrasse Tyson, a fave for his witty support of science
    • Beau Lotto talking about perception
    • Belinda Parmar talking about women in tech (a burning issue right now)
  • DemoFest, all the great examples people are bringing
  • and, of course, the networking opportunities

DevLearn is probably my favorite conference of the year: learning focused, technologically advanced, well organized, and with the right people.  If you can’t make it this year, you might want to put it on your calendar for another!

21 October 2014

Extending Mobile Models

Clark @ 8:19 am

In preparation for a presentation, I was reviewing my mobile models. You may recall I started with my 4C's model (Content, Compute, Communicate, & Capture), and have mapped that further onto Augmenting Formal, Performance Support, Social, & Contextual.  I've refined it as well, separating out contextual and social as different ways of looking at formal and performance support.  And, of course, I've elaborated it again, and wonder whether you think this more detailed conceptualization makes sense.

So, my starting point was realizing that it wasn't just content.  That is, there's a difference between content and compute: interactivity was an important part of the 4C's, and the characteristics in the content box weren't discriminated enough.  So the two new initial sections are mlearning content and mlearning compute, each by self or social: we can be getting things as an individual, or it can be something that's socially generated or socially enabled.

The point is that content is prepared media, whether text, audio, or video.  It can be delivered or accessed as needed. Compute, interactive capability, is harder, but potentially more valuable. Here, an individual might actively practice, have mixed-initiative dialogs, or even work with others or with tools to develop an outcome or update existing shared resources.

Things get more complex when we go beyond these elements.  I had capture as one thing, and I'm beginning to think it's two: one is capturing the current context and sharing it for various purposes, and the other is the system using that context to do something unique.

To be clear here, capture is where you use text entry, the microphone, or the camera to catch unique contextual data (or user input).  It could also be other data, such as location, time, barometric pressure, temperature, or more. This data is then available to review, reflect on, or more.  It can be combinations, of course, e.g. a picture at this time and this location.
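
As a sketch of what such a capture might hold, here's a hypothetical record type (field names invented for illustration) combining user input with contextual data:

```typescript
// A hypothetical capture record: user input plus whatever contextual
// data the device can supply. Field names are invented for illustration.
interface CaptureRecord {
  input?: { kind: "text" | "audio" | "photo"; uri: string };
  location?: { latitude: number; longitude: number };
  timestamp: Date;
  barometricPressureHPa?: number;
  temperatureC?: number;
}

// e.g. "a picture at this time and this location"
const siteVisit: CaptureRecord = {
  input: { kind: "photo", uri: "file:///captures/site-visit.jpg" },
  location: { latitude: 36.1699, longitude: -115.1398 },
  timestamp: new Date(),
};
```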

Now, if the system uses this information to do something different than it would under other circumstances, we're contextualizing what we do. Whether it's because of when you are, providing time-specific information, or where you are, using location characteristics, this is likely to be the most valuable opportunity.  Here I'm thinking of alternate reality games or augmented reality (whether voiceover, visual overlays, what have you).
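
And a minimal sketch of contextualizing, where the same request yields different content depending on where and when the learner is (the rule, the threshold, and the content strings are all invented):

```typescript
// Contextualizing: the same request yields different content depending on
// where (and when) the learner is. Rules and content strings are invented.
interface Context {
  latitude: number;
  longitude: number;
  hour: number; // 0-23, local time
}

function contentFor(ctx: Context): string {
  // Crude flat-earth proximity check (reference point: 36.17, -115.14),
  // fine for a sketch.
  const nearExhibit =
    Math.hypot(ctx.latitude - 36.17, ctx.longitude + 115.14) < 0.01;
  if (nearExhibit) {
    return "AR overlay: background on the exhibit in front of you";
  }
  if (ctx.hour < 9) {
    return "Spaced practice: a question from yesterday's session";
  }
  return "Default: browse the content library";
}
```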

And I think this is device independent, e.g. it could apply to watches or glasses or … as well as phones and tablets.  It means my 4 C's become: content, compute, capture, and contextualize.  To ponder.

So, this is a more nuanced look at the mobile opportunities, and certainly more complex as well. Does the greater detail provide greater benefit?


17 September 2014

Learning in 2024 #LRN2024

Clark @ 8:14 am

The eLearning Guild is celebrating its 10th year, and is using the opportunity to reflect on what learning will look like 10 years from now.  While I couldn't participate in the twitter chat they held, I optimistically weighed in: "learning in 2024 will look like individualized personal mentoring via augmented reality, AI, and the network".  However, I thought I would elaborate in line with a series of followup posts leveraging the #lrn2024 hashtag.  The twitter chat had a series of questions, so I'll address them here (with the caveat that our learning itself really hasn't changed: our wetware hasn't evolved in the past decade and won't in the next; it's our support of learning I'm referring to here):

1. How has learning changed in the last 10 years (from the perspective of the learner)?

I reckon the learner has seen a significant move to more elearning instead of an almost complete dependence on face-to-face events.  And I reckon most learners have begun to use technology in their own ways to get answers, whether via the Google, or social networks like Facebook and LinkedIn.  And I expect they're seeing more media such as videos and animations, and may even be creating their own. I also expect that the elearning they're seeing is not particularly good, nor improving, if not actually decreasing in quality.  I expect they're seeing more info dump/knowledge test, more and more 'click to learn more', more tarted-up drill-and-kill.  For which we should apologize!

2. What is the most significant change technology has made to organizational learning in the past decade?

I reckon there are two significant changes that have happened. One is rather subtle as yet, but will be profound, and that is the ability to track more activity, mine more data, and gain more insights. The Experience API coupled with analytics is a huge opportunity.  The other is the rise of social networks.  The ability to stay more tightly coupled with colleagues, sharing information and collaborating, has really become mainstream in our lives, and is going to have a big impact on our organizations.  Working 'out loud', showing our work, and working together is a critical inflection point in bringing learning back into the workflow in a natural way, and away from the 'event' model.
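
For the unfamiliar, the Experience API tracks activity as actor-verb-object "statements" that a Learning Record Store can aggregate for analytics. A minimal sketch of the shape of one such statement (the person, email, and activity ID are illustrative):

```typescript
// The shape of an Experience API statement: actor, verb, object.
// The name, email, and activity ID below are invented for illustration.
const statement = {
  actor: {
    name: "Pat Learner",
    mbox: "mailto:pat.learner@example.com",
  },
  verb: {
    // "completed" is one of the standard ADL verbs
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" },
  },
  object: {
    id: "http://example.com/activities/safety-simulation",
    definition: { name: { "en-US": "Safety simulation" } },
  },
  timestamp: new Date().toISOString(),
};
// A Learning Record Store collects statements like this one, and
// analytics run over the accumulated stream.
```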

3. What are the most significant challenges facing organizational learning today?

The most significant challenge is the status quo: the belief that an information-oriented event model has any relationship to meaningful outcomes.  This plays out in so many ways: order-taking for courses, equating information with skills, being concerned with speed and quantity instead of quality of outcomes, not measuring the impact; the list goes on.  We've deluded ourselves that having an LMS and a rapid elearning tool means we're doing something worthwhile, when it's profoundly wrong.  L&D needs a revolution.

4. What technologies will have the greatest impact on learning in the next decade? Why?

The short answer is mobile.  Mobile is the catalyst for change. So many other technologies go through the hype cycle: initial over-excitement, crash, and then a gradual resurgence (cf. virtual worlds), but mobile has been resistant for the simple reason that there's so much value proposition.  The cognitive augmentation that digital technology provides, available whenever and wherever you are, clearly has benefits, and it's not courses!  It will naturally incorporate augmented reality with the variety of new devices we're seeing, and be contextualized as well.  We're seeing a richer picture of how technology can support us in being effective, and L&D can facilitate these other activities as a way to move to a more strategic and valuable role in the organization.  As above, the new tracking and analysis tools, and social networks, will matter too.  I'll add that simulations/serious games are an opportunity that has yet to really be capitalized on.  (There are reasons I wrote those books :)

5. What new skills will professionals need to develop to support learning in the future?

As I wrote (PDF), the new skills that are necessary fall into two major categories: performance consulting and interaction facilitation.  We shouldn't design courses until we've ascertained that no other approach will work, so we need to get down to the real problems. We should hope the answer comes from the network when it can, design performance support solutions when it can't, and reserve courses for only when it absolutely has to be in the head. Getting good outcomes from the network takes facilitation, and I think facilitation is a good model for promoting innovation, supporting coaching and mentoring, and helping individuals develop self-learning skills.  So the abilities to get at the root causes of problems, choose between solutions, and measure the impact are key for the first category; understanding what skills individuals need (whether performers or mentors/coaches/leaders), and how to develop them, are the key new additions for the second.

6. What will learning look like in the year 2024?

Ideally, it would look like an ‘always on’ mentoring solution, so the experience is that of someone always with you to watch your performance and provide just the right guidance to help you perform in the moment and develop you over time. Learning will be layered on to your activities, and only occasionally will require some special events but mostly will be wrapped around your life in a supportive way.  Some of this will be system-delivered, and some will come from the network, but it should feel like you’re being cared for in the most efficacious way.

In closing, I note that, unfortunately, my Revolution book and the Manifesto were both driven by a sense of frustration around the lack of meaningful change in L&D. Hopefully they're riding, or catalyzing, the needed change, but in a cynical mood I might believe that things won't change nearly as much as I'd hope. I also remember a talk (cleverly titled Predict Anything but the Future :) that said the future tends to come out as an informed basis would predict, but with an unexpected twist, so it'll be interesting to discover what that twist will be.

16 September 2014

On the Road Fall 2014

Clark @ 8:05 am

Fall always seems to be a busy time, and I reckon it’s worthwhile to let you know where I’ll be in case you might be there too! Coming up are a couple of different events that you might be interested in:

September 28-30 I’ll be at the Future of Talent retreat  at the Marconi Center up the coast from San Francisco. It’s a lovely spot with a limited number of participants who will go deep on what’s coming in the Talent world. I’ll be talking up the Revolution, of course.

October 28-31 I’ll be at the eLearning Guild’s DevLearn in Las Vegas (always a great event; if you’re into elearning you should be there).  I’ll be running a Revolution workshop (I believe there are still a few spots), part of  a mobile panel, and talking about how we are going about addressing the challenges of learning design at the Wadhwani Foundation.

November 12-13 I’ll be part of the mLearnNow event in New Orleans (well, that’s what I call it, they call it LearnNow mobile blah blah blah ;).  Again, there are some slots still available.  I’m honored to be co-presenting with Sarah Gilbert and Nick Floro (with Justin Brusino pulling strings in the background), and we’re working hard to make sure it should be a really great deep dive into mlearning.  (And, New Orleans!)

There may be one more opportunity, so if anyone in Sydney wants to talk, consider Nov 21.

Hope to cross paths with you at one or more of these places!

1 July 2014

Wearable affordances

Clark @ 8:10 am

At the mLearnCon conference, it became clear it was time to write about wearables.  At the same time, David Kelly (program director for the Guild) asked for conference reflections for the Guild blog. Long story short, my reflections are a guest post there.

25 June 2014

Karen McGrane #mLearnCon Keynote Mindmap

Clark @ 9:54 am

Karen McGrane evangelized good content architecture (a topic near to my heart), in a witty and clear keynote. With amusing examples and quotes, she brought out just how key it is to move beyond hard wired, designed content and start working on rule-driven combinations from structured chunks. Great stuff!

[Mindmap of Karen McGrane's mLearnCon keynote]

21 May 2014

Getting contextual

Clark @ 8:07 am

For the current ADL webinar series on mobile, I gave a presentation on contextualizing mobile in the larger picture of L&D (a natural extension of my most recent books).  And a question came up about whether I thought wearables constituted mobile.  Naturally my answer was yes, but I realized there’s a larger issue, one that gets meta as well as mobile.

So, I've argued that we should be looking at models to guide our behavior: that we should be creating them by abstracting from successful practices, conceptualizing them ourselves, or adopting them from other areas.  A good model, with rich conceptual relationships, provides a basis for explaining what has happened and predicting what will happen, giving us a basis for making decisions.  Which means models need to be as context-independent as possible.

So, for instance, when I developed the mobile models I use, e.g. the 4C's and the applications of learning (see figure), I deliberately tried to create an understanding that would transcend the rapid changes characterizing mobile, and make the models appropriately recontextualizable.

In the case of mobile, one of the unique opportunities is contextualization.  That means using information about where you are, when you are, which way you’re looking, temperature or barometric pressure, or even your own state: blood pressure, blood sugar, galvanic skin response, or whatever else skin sensors can detect.

To put that into context (see what I did there): with desktop learning, augmenting formal could be emails that provide new examples or practice spread out over time. With a smartphone you can do the same, but you could also have localized information, so that because of where you were you might get information related to a learning goal. With a wearable, you might get some information because of what you're looking at (e.g. a translation, or a connection to something else you know), or due to your state (too anxious? stop and wait 'til you calm down).
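
That last case is easy to sketch: a hypothetical rule that holds a learning prompt until the wearable's sensors suggest the learner has calmed down (the thresholds are invented, purely for illustration):

```typescript
// A hypothetical state-based rule: hold a learning prompt until the
// wearable's sensors suggest the learner has calmed down.
// The thresholds are invented, purely for illustration.
interface WearableState {
  heartRateBpm: number;
  galvanicSkinResponse: number; // normalized 0..1
}

function shouldDeliverPrompt(state: WearableState): boolean {
  const tooAnxious =
    state.heartRateBpm > 110 || state.galvanicSkinResponse > 0.8;
  return !tooAnxious; // wait until the learner has calmed down
}

console.log(shouldDeliverPrompt({ heartRateBpm: 72, galvanicSkinResponse: 0.2 })); // true
```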

Similarly for performance support: with a smartphone you could take what comes through the camera and add information onto what shows on the screen; with glasses you could lay it over the visual field.  With a watch or a ring, you might get an audio narration.  And we've already seen how the accelerometers in fitness bracelets can track your activity and put it in context for you.

Social can not only connect you to the people you need to know, regardless of device or channel, but also signal you that someone's near, detecting their face or voice and cluing you in that you've met this person before.  Or it can find someone you should meet because you're nearby.

All of the above use contextual information to augment the other tasks you're doing.  The point is that you map the technology to the need, and infer the possibilities.  Models are a better basis for elearning, too, so that you teach transferable understandings (made concrete in practice) rather than specifics that can get outdated.  This is one of the elements we placed in the Serious eLearning Manifesto, of course.  They're also useful for coaching and mentoring, as well as for problem-solving, innovating, and more.

Models are powerful tools for thinking, and good ones will support the broadest possible uses.  And that’s why I collect them, think in terms of them, create them, and most importantly, use them in my work.   I encourage you to ensure that you’re using models appropriately to guide you to new opportunities, solutions, and success.

14 April 2014

How do we mLearn?

Clark @ 6:56 am

As preface, I used to teach interface design.  My passion was still learning technology (and has been since I saw the connection as an undergraduate and designed my own major), but there are strong links between the two fields in terms of designing for humans.  My PhD advisor was a guru of interface design, and the thought was that any student of his should be able to teach interface design.  And so it turned out.  So interface design continues to be an interest of mine, and I recognize its importance. More so on mobile, where limited interface real estate means more cleverness may be required.

Steven Hoober, whom I had the pleasure of sharing a stage with at an eLearning Guild conference, is a notable UI design expert with a specialty in mobile.  He had previously conducted a research project examining how people actually hold their phones, as opposed to relying on anecdotes.  The Guild's research director, Patti Shank, obviously thought this interesting enough to extend, because they've jointly published the results of the initial report along with subsequent research into tablets. And the results are important.

The biggest result, for me, is that people tend to use phones while standing and walking, and tablets while sitting.  While you can hold a tablet with two hands and type, it’s hard.  The point is to design for supported use with a tablet,  but for handheld use with a phone. Which actually does imply different design principles.

I note that I still believe tablets to be mobile, as they can be used naturally while standing and walking, as opposed to laptops. Though you can support them, you don't have to.  (I'm not going to let the fact that there are special harnesses you can buy to hold tablets while you stand, for applications like medical facilities, dissuade me; my mind's made up so don't confuse me :)

The report goes into more detail about just how people hold devices in their hands (one-handed with thumb, one hand holding, one hand touching, two hands with two thumbs, etc.), and the proportion of each grip.  This has impact on where on the screen you put information and interaction elements.

Another point is the importance of the center for information and the periphery for interaction; yet users are more accurate at the center, so you need to make your peripheral targets larger and easier to hit. Seemingly obvious, but somehow obviousness doesn't seem to hold in too much of design!
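
One way to honor that finding, purely as an illustrative sketch (the numbers are mine, not from the report), is to scale a control's minimum touch-target size with its distance from the screen center:

```typescript
// Scale the minimum touch-target size with distance from screen center:
// users are more accurate at the center, so edge targets get bigger.
// The base size and scaling factor are invented, not from the report.
function minTargetSizePx(
  targetX: number,
  targetY: number,
  screenW: number,
  screenH: number
): number {
  // Normalized distance from center: 0 at the center, 1 at a corner.
  const dx = (targetX - screenW / 2) / (screenW / 2);
  const dy = (targetY - screenH / 2) / (screenH / 2);
  const d = Math.min(1, Math.hypot(dx, dy) / Math.SQRT2);
  const base = 44; // a commonly cited minimum touch-target size
  return Math.round(base * (1 + 0.5 * d)); // up to 50% larger at the edges
}

// e.g. a corner button on a 375x667 screen needs about 65px
console.log(minTargetSizePx(10, 10, 375, 667));
```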

There is a wealth of other recommendations scattered throughout the report, with specifics for phones and for small and large tablets, as well as major takeaways.  For example, the fact that tablets are often supported implies that more consideration of font size is needed than you'd expect!

The report is freely available on the Guild site in the Research Library (under the Content>Research menu).  Just in time for mLearnCon!  

