Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

16 November 2017

#AECT17 Conference Contributions

Clark @ 8:04 AM

So, at the recent AECT 2017 conference, I participated in three ways that are worth noting.  I had the honor of participating in two sessions based upon writings I’d contributed, and one based upon my own cogitations. I thought I’d share the thinking.

For my own presentation, I shared my efforts to move 'rapid elearning' forward. I positioned Van Merrienboer's 4 Component ID and Guy Wallace's Lean ISD as the goal, but recognized the need for intermediate steps like Michael Allen's SAM, David Merrill's 'Pebble in a Pond', and Cathy Moore's Action Mapping. I suggested that even these might be too big a leap, and that practitioners might want steps that are slight improvements on their existing processes. These included three things: heuristics, tools, and collaboration. Here I was indicating specifics for each that could move elearning from well-produced to well-designed.

In short, I suggested that while collaboration is good, many corporate situations want to minimize staff. Consequently, I suggested identifying those critical points where collaboration will be most useful. Then, I suggested shortcuts relative to the full processes. So, for instance, when working with SMEs, focus on decisions to keep the discussion away from unnecessary knowledge. Finally, I suggested using tools to bridge the gaps our brain architectures create. Unfortunately, the audience was small (27 parallel sessions, and at the end of the conference), so there wasn't a lot of feedback. Still, I did have some good discussion with attendees.

Then, for one of the two participation sessions: the book I contributed to had solicited a wide variety of position papers from respected ed tech individuals, and then solicited responses to them. I had responded to a paper suggesting three trends in learning: a lifelong learning record system, a highly personalized learning environment, and expanded learner control of the time, place, and pace of instruction. To those three points I added two more: the integration of meta-learning skills and the breakdown of the barrier between formal learning and lifelong learning. I believe both are going to be important, the former because of the decreasing half-life of knowledge, the latter because of the ubiquity of technology.

Because the original author wasn't present, I was paired for discussion with another author who shares my passion for engaging learning, and that was the topic of our discussion table. The format was fun; we were distributed in pairs around tables, and attendees chose where to sit. We had an eager group who were interested in games, and my colleague and I took turns answering questions and building on each other's comments. It was a nice combination. We talked about the processes for design, selling the concept, and more.

For the other participation session, the book was a series of monographs on important topics. The discussion covered a subset of four topics: MOOCs, Social Media, Open Resources, and mLearning. I had written the mLearning chapter. The chapter format included 'take home' lessons, and the editor wanted our presentations to focus on these. I posited the basic mindshifts necessary to take advantage of mlearning. These included five basic principles:

  1. mlearning is not just mobile elearning; mlearning is a wide variety of things.
  2. the focus should be on augmenting us, whether in our formal learning, or via performance support, social, etc.
  3. apply the Least Assistance Principle, focusing on the core stuff given the limited interface.
  4. leverage context: take advantage of the sensors and situation to minimize content and maximize opportunity.
  5. recognize that mobile is a platform, not a tactic or an app; once you 'go mobile', folks will want more.

The sessions were fun, and the feedback was valuable.

13 September 2017

Why AR

Clark @ 8:07 AM

Perhaps inspired by Apple's focus on Augmented Reality (AR), I thought I'd take a stab at conveying the types of things that could be done to support both learning and performance. I took a sample of some of my photos and marked them up. I'm sure there's lots more that could be done (there were some great games), but I'm focusing on simple information that I would like to see. It's mocked up (the arrows are hand drawn), so understand I'm talking concept here, not execution!

Magnolia

Here, I'm starting small. This is a photo I took of a flower on a walk. This is the type of information I might want while viewing the flower through the screen (or glasses). The system could tell me it's a tree, not a bush, technically (thanks to my flora-wise better half). It could also illustrate how large it is. Finally, the view could indicate that what I'm viewing is a magnolia (which I wouldn't have known), and show me, off to the right, the flower bud stage.

The point is that we can get information around the particular thing we're viewing. I might not actually care about the flower bud, so that might be filtered out, and it might instead talk about any medicinal uses. Also, it could be dynamic, animating the process of going from bud to flower and falling off. It could also talk about the types of animals (bees, hummingbirds, ?) that interact with it, and how. It would depend on what I want to learn. And, perhaps, include some additional incidental information on the periphery of my interests, for serendipity.
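To make that concrete, here's a minimal sketch in Python (the class names, topics, and flower facts are all invented for illustration, not any real AR framework) of overlay annotations tagged by topic and filtered against a learner's interests, with a little room left for serendipity:

```python
import random
from dataclasses import dataclass

@dataclass
class Annotation:
    """One piece of overlay information about a viewed object."""
    text: str
    topics: set   # e.g. {"botany", "medicine"}

@dataclass
class LearnerProfile:
    interests: set
    serendipity: float = 0.1   # chance to surface an off-interest item anyway

def filter_annotations(annotations, profile):
    """Keep on-interest annotations, plus the occasional serendipitous one."""
    keep = []
    for a in annotations:
        if a.topics & profile.interests or random.random() < profile.serendipity:
            keep.append(a)
    return keep

# Hypothetical overlay data for the magnolia example
flower_info = [
    Annotation("It's technically a tree, not a bush", {"botany"}),
    Annotation("Bud stage shown to the right", {"botany", "lifecycle"}),
    Annotation("Bark extracts appear in traditional medicine", {"medicine"}),
    Annotation("Visited by bees, hummingbirds, and other animals", {"ecology"}),
]
me = LearnerProfile(interests={"medicine", "ecology"})
for a in filter_annotations(flower_info, me):
    print(a.text)
```

The design point is that the same scene carries many possible annotations; what changes is the filter the learner (or a coach) applies over them.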

Neighborhood view

Going wider, here I'm looking out at a landscape, and the overlay is providing directions. Downtown is straight ahead, my house is over that ridge, and infamous Mt. Diablo is off to the left of the picture. It could do more: pointing out that the green ridges are grapes, or providing the name of the neighborhood in the foreground (I call it Stepford Downs, after the movie ;).

Dynamically, of course, if I moved the camera to the left, Mt. Diablo would get identified when it sprang into view. As we moved around, we'd point to the neighboring towns in view, and in the direction of further towns blocked by mountain ranges. We could also identify the river flowing past to the north. And we could instead focus on other information: infrastructure (pipes and electricity), government boundaries, whatever's relevant could be filtered in or out.
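Under the hood, this kind of labeling is mostly arithmetic: a landmark gets its label when the bearing from the viewer to it falls within the camera's field of view. A rough sketch, assuming only GPS coordinates and a compass heading (the viewer position here is made up; real AR frameworks handle this, and much more, for you):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def in_view(camera_heading, fov, target_bearing):
    """True if the target's bearing falls in the camera's horizontal FOV."""
    diff = (target_bearing - camera_heading + 180) % 360 - 180
    return abs(diff) <= fov / 2

# Hypothetical viewer position; Mt. Diablo's summit coordinates are real.
viewer = (37.94, -122.03)
mt_diablo = (37.8816, -121.9142)
b = bearing_deg(*viewer, *mt_diablo)
# Camera pointed east-southeast with a 60-degree field of view:
print(f"Label Mt. Diablo? {in_view(camera_heading=120.0, fov=60.0, target_bearing=b)}")
```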

Road pic

And in this final example, taken from the car on a trip, AR might indicate some natural features. Here I've pointed to the clouds (and indicated the likelihood of rain). Similarly, I've identified the rock type and the mechanism that shaped it. (These are all made up, and they could be wrong; Mt Faux definitely is!) We might even be able to touch a label and have it expand.

Similarly, as we moved, information would change as we viewed different areas. We might even animate what the area looked like hundreds of thousands of years ago and how it’s changed.  Or we could illustrate coming changes. It could instead show boundaries of counties or parks, types of animals, or other relevant information.

The point here is that annotating the world, a capability AR has, can be an amazing learning tool. If I can specify my interests, we can capitalize on them to develop understanding. And this is just for adults. Think about doing this for kids, layering on information in their Zone of Proximal Development and interests! I know VR's cool, and has real learning potential, but there you have to create the context. Here we're taking advantage of it. That may be harder, but it's going to have some real upsides when it can be done ubiquitously.

20 July 2017

Augmented Reality Lives!

Clark @ 8:07 AM

Visually Augmented Reality

Augmented Reality (AR) is on the upswing, and I think this is a good thing. I think AR makes sense, and it's nice to see both solid tool support and real use cases emerging. Here's the news, but first, a brief overview of why I like AR.

As I’ve noted before, our brains are powerful, but flawed.  As with any architecture, any one choice will end up with tradeoffs. And we’ve traded off detail for pattern-matching.  And, technology is the opposite: it’s hard to get technology to do pattern matching, but it’s really good at rote. Together, they’re even more powerful. The goal is to most appropriately augment our intellect with technology to create a symbiosis where the whole is greater than the sum of the parts.

Which is why I like AR: it's about annotating the world with information, augmenting it to our benefit. It's contextual, that is, doing things because of when and where we are. AR augments us sensorily, whether auditorily, visually, or kinesthetically (e.g. vibration). Auditory and kinesthetic annotation is relatively easy; devices generate sounds or vibrations (think GPS: "turn left here"). Non-coordinated visual information, information that's not overlaid on the view, is presented as either graphics or text (think Yelp: maps and distances to nearby options). Tools already exist to do this, e.g. ARIS. However, arguably the most compelling and interesting form is aligned visuals.

Google Glass was a really interesting experiment, and it's back. The devices – glasses with a camera and a projector that can present information on the glass – were available, but did little with where you were looking. There were generic heads-up displays and a camera, but little alignment between what was seen and the additional information consequently presented to the user. That's changed. Google Glass has a new Enterprise Edition, and it's being used to meet real needs and generate real outcomes. Glasses are supporting accuracy in manufacturing situations requiring careful placement. The necessary components and steps are highlighted on screen, reducing errors and speeding up outcomes.

And Apple has released its Augmented Reality software toolkit, ARKit, with features to make AR easy. One interesting aspect is the built-in machine learning, which could make aligning with objects in the world easy! Incompatible platforms and standards impede progress, but with Google and Apple creating tools for each of their platforms, development can be accelerated. (I hope to find out more at the eLearning Guild's Realities 360 conference.)

While I think Virtual Reality (VR) has an important role to play for deep learning, I think contextual support can be a great aid for extending learning (particularly personalization), as well as for performance support. That's why I'm excited about AR. My vision has been that we'll have a personal coaching system that knows where and when we are and what our goals are, and can facilitate our learning and success. Tools like these will make that easier than ever.

27 June 2017

FocusOn Learning reflections

Clark @ 8:08 AM

If you follow this blog (and you should :), it was pretty obvious that I was at the FocusOn Learning conference in San Diego last week (the previous two posts were mindmaps of the keynotes). And it was fun as always. Here are some further reflections on what happened, as an exercise in meta-learning.

There were three themes to the conference: mobile, games, and video. I'm pretty active in the first two (two books on the former, one on the latter), and the last is related to things I care about and talk about. The focus led to some interesting outcomes: some folks were very interested in just one of the topics, while others were looking a bit more broadly. Whether that's good or not depends on your perspective, I guess.

Mobile was present, happily, and continues to evolve.  People are still talking about courses on a phone, but more folks were talking about extending the learning.  Some of it was pretty dumb – just content or flash cards as learning augmentation – but there were interesting applications. Importantly, there was a growing awareness about performance support as a sensible approach.  It’s nice to see the field mature.

For games, there were positive and negative signs. The good news is that games are being more fully understood in terms of their role in learning, e.g. deep practice. The bad news is that there's still a lot of interest in gamification without a concomitant awareness of the important distinctions. Tarting up drill-and-kill with PBL (points, badges, and leaderboards; the new acronym, apparently) isn't worth significant interest! We know how to drill the things that must be drilled, but our focus should be on intrinsic interest.

As a side note, the demise of Flash has left us without a good game development environment. Flash is both a development environment and a delivery platform. As a development environment Flash had a low learning threshold, and yet could be used to build complex games.  As a delivery platform, however, it’s woefully insecure (so much so that it’s been proscribed in most browsers). The fact that Adobe couldn’t be bothered to generate acceptable HTML5 out of the development environment, and let it languish, leaves the market open for another accessible tool. And Unity or Unreal provide good support (as I understand it), but still require coding.  So we’re not at an easily accessible place. Oh, for HyperCard!

Most of the video interest was in technical issues (how to get quality and/or do it on the cheap), but a lot of interest was also in interactive video. I think branching video is a really powerful learning environment for contextualized decision making. As a consequence, the advent of tools that make it easier is to be lauded. An interesting session with the wise Joe Ganci (@elearningjoe) and a GoAnimate guy talked about when to use video versus animation, which largely seemed to reflect my view (confirmation bias ;) that it's about whether you want more context (video) or concept (animation). Of course, it was also about the cost of production and the need for fidelity (video more than animation in both cases).
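To see why tooling matters, note that a branching scenario is structurally just a graph of clips and decision points. Here's a toy sketch (the scenario and file names are invented, and no real authoring tool's format is implied):

```python
# Minimal branching-video structure: each node is a clip plus labeled choices.
scenario = {
    "intro": ("customer_complains.mp4", {
        "Apologize and ask questions": "probe",
        "Offer an immediate refund": "refund",
    }),
    "probe": ("customer_explains.mp4", {
        "Propose a fix": "resolved",
        "Escalate to a manager": "escalate",
    }),
    "refund": ("customer_leaves_unsatisfied.mp4", {}),   # end node
    "resolved": ("customer_thanks_you.mp4", {}),
    "escalate": ("manager_steps_in.mp4", {}),
}

def play(node="intro"):
    clip, choices = scenario[node]
    print(f"[playing {clip}]")
    while choices:
        for i, label in enumerate(choices, 1):
            print(f"  {i}. {label}")
        node = list(choices.values())[int(input("Your decision: ")) - 1]
        clip, choices = scenario[node]
        print(f"[playing {clip}]")

if __name__ == "__main__":
    play()
```

The learning value lives in the decision points, which is why authoring tools that make the graph easy to build and revise matter more than the video production itself.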

There was a lot of interest in VR, which crossed over between video and games.  Which is interesting because it’s not inherently tied to games or video!  In short, it’s a delivery technology.  You can do branching scenarios, full game engine delivery, or just video in VR. The visuals can be generated as video or from digital models. There was some awareness, e.g. fun was made of the idea of presenting powerpoint in VR (just like 2nd Life ;).

I did an ecosystem presentation that contextualized all three (video, games, mobile) in the bigger picture, and also drew out their cognitive and L&D roles. I also deconstructed the game Fluxx (a really fun game with an interesting 'twist'). Overall, it was a good conference (and nice to be in San Diego, one of my 'homes').

23 May 2017

Some new elearning companies ;)

Clark @ 8:03 AM

As I continue to track what’s happening, I get the opportunity to review a wide number of products and services. While tracking them all would be a full-time job, occasionally some offer new ideas.  Here’s a collection of those that have piqued my interest of late:

Sisters eLearning: these folks are taking a kinder, gentler approach to their products and marketing their services. Their signature offering is a suite of templates for your elearning featuring cooperative play. Their approach in their custom development is quiet and classy. This is reflected in the way they promote themselves at conferences: they all wear mauve polos and sing beautiful a cappella. Instead of giveaways, they quietly provide free home-baked mini-muffins for all.

Yalms: these folks are offering the ‘post-LMS’. It’s not an LMS, and instead offers course management, hosting, and tracking.  It addresses compliance, and checks a whole suite of boxes such as media portals, social, and many non-LMS things including xAPI. Don’t confuse them with an LMS; they’re beyond that!

MicroBrain: this company has developed a system that makes it easy to take your existing courses and chunk them up into little bits. Then it pushes them out on a schedule. It’s a serendipity model, where there’s a chance it just might be the right bit at the right time, which is certainly better than your existing elearning. Most importantly, it’s mobile!

OffDevPeeps: these folks offer a full suite of technology development services including mobile, AR, VR, micro, macro, long, short, and anything else you want, all done at a competitive cost. If you are focused on the 'fast' and 'cheap' corners of the fast-cheap-good triangle, these are the folks to talk to. Coming soon to an inbox near you!

DanceDanceLearn: provides a completely unique offering. They have developed an authoring tool that makes it easy for you to animate dancers moving in precise formations that spell out content. They also have a synchronized swimming version.  Your content can be even more engaging!

There, I hope you’ll find these of interest, and consider checking them out.

Any relation between the companies portrayed and real entities is purely coincidental.  #couldntstopmyself #allinfun

10 May 2017

Designing Microlearning

Clark @ 8:04 AM

Yesterday, I clarified what I meant about microlearning. Earlier, I wrote about designing microlearning, but what I was really talking about was the design of spaced learning. So how should you design the type of microlearning I really feel is valuable?

To set the stage, here we're talking about layering learning on performance in a context. However, it's more than just performance support. Performance support would be providing a set of steps (in whatever way: a series of static photos, video, etc.) or supporting those steps (checklist, lookup table, etc.). And again, this is a good thing, but microlearning, I contend, is more.

To make it learning, what you really need is to support developing an ability to understand the rationale behind the steps, to support adapting the steps to different situations. Yes, you can do this in performance support as well, but here we're talking about models.

What (causal) models give us is a way to explain what has happened, and predict what will happen. When we make these available around performing a task, we unpack the rationale. We want to provide an understanding behind the rote steps, to support adaptation of the process in different situations. We also provide a basis for regenerating missing steps.

Now, we can also provide examples, e.g. how the model plays out in different contexts. If what the learner is doing now can change under certain circumstances, elaborating how the model guides performing differently in different contexts provides the ability to transfer that understanding.

The design process, then, would be to identify the model guiding the performance (e.g. why we do things in this order), and it might be an interplay between structural constraints (we have to remove this screw first because…) and causal ones (this is the chemical that catalyzes the process). We need to identify these models and determine how to represent them, as sketched below.
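As a hypothetical illustration of layering models on steps, here's what that might look like as data. The task and rationales are invented; the point is just that each rote action carries both kinds of 'why':

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str          # the rote instruction (what performance support shows)
    structural_why: str  # constraint-based rationale
    causal_why: str      # model-based rationale

# Hypothetical maintenance task: replacing a filter element
replace_filter = [
    Step("Power down the unit",
         "The housing is interlocked and can't open while powered",
         "Live current across the element risks a short"),
    Step("Remove the front screws first",
         "The rear panel seats under the front lip",
         "Releasing panels unevenly warps the gasket seal"),
    Step("Swap the filter and reseat the gasket",
         "The gasket only seats in one orientation",
         "A loose seal lets unfiltered air bypass the element"),
]

# Performance support shows only `action`; microlearning surfaces the
# model fields in context, on demand.
for s in replace_filter:
    print(f"{s.action} -- because: {s.causal_why}")
```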

Once we've identified the task, and the associated models, we then need to make these available through the context. And here's why I'm excited about augmented reality: it's an obvious way to make the model visible. Quite simply, it can be layered on top of the task itself! Imagine that the workings behind what you're doing are available if you want. That you can explore more as you wish, or not, and simply accept the magic ;).

The actual task is the practice, but I'm suggesting that providing a model explaining why it's done this way is the minimum, and providing examples for a representative sample of other appropriate contexts supports richer performance. Delivered, to be clear, in the context itself. Still, this is what I think really constitutes microlearning. So what say you?

9 May 2017

Clarifying Microlearning

Clark @ 8:05 AM

I was honored to learn that a respected professor of educational technology liked my definition of microlearning, such that he presented it at a recent conference. He asked if I still agreed with it, and I looked back at what I'd written more recently. What I found was that I'd suggested some alternate interpretations, so I thought it worthwhile to be absolutely clear about it.

So, the definition he cited was:

Microlearning is a small, but complete, learning experience, layered on top of the task learners are engaged in, designed to help learners learn how to perform the task.

And I agree with this, with a caveat. In the article, I'd said that it could also be a small, complete learning experience, period. My clarification is that those are unlikely; the definition he cited is the most likely case, and likely the most valuable.

So, I’ve subsequently said (and elaborated on the necessary steps):

What I really think microlearning could and should be is for spaced learning.

Here I was succumbing to the hype, trying to put a positive spin on microlearning. Spaced learning is a good thing; it's just not microlearning. And microlearning really isn't about helping people perform the task in the moment (that's performance support, which is a good thing too), but instead about leveraging that moment to also extend their understanding.

No, I like the original definition, where we layer learning on top of a task, leveraging the context and requiring minimal content, to take a task and make it a learning opportunity. That, too, is a good thing. At least I think so. What do you think?

14 March 2017

Microdesign

Clark @ 8:01 AM

There's been a lot of talk about microlearning of late – definitions, calls for clarity, value propositions, etc – and I have to say that I'm afraid some of it (not what I've linked to) is a wee bit facile. Or, at least, conceptually unclear. And I think that's a problem. This came up again in a recent conversation, and I had a further thought (which of course I have to blog about ;). It's about how to do microdesign, that is, how to design microlearning. And it's not trivial.

So one of the common views of microlearning is that it's just-in-time. That is, if you need to know how to do something, you look it up. And that's just fine (as I've recently ranted). But it's not learning. (In short: it'll help you in the moment, but unless you design it to support learning, it's performance support instead.) You can call it just-in-time support, or microsupport, but properly, it's not microlearning.

The other notion is learning that's distributed over time. And that's good. But this takes a bit more thought. Think about it: if we want to systematically develop somebody over time, it's not just a steady stream of 'stuff'. Ideally, it's designed to get there optimally, minimizing the time taken on the part of the learner while yielding reliable improvements. And this is complex.

In principle, it should be a steady development that reactivates and extends learners' capabilities in systematic ways. So, you still need your design steps, but you have to think about granularity, forgetting, reactivation, and development in a more fine-grained way. What's the minimum launch? Can you do aught but make sure there's an initial intro, concept, example, and a first practice? Then, how much do we need to reactivate versus how much do we have to expand the capability in each iteration? How much is enough? As Will Thalheimer says in his spaced learning report, the amount and duration of spacing depend on the complexity of the task and the frequency with which it's performed.

When do you provide more practice, versus another example, versus a different model? What's the appropriate gap in complexity? We'll likely have to make our best guesses and tune, but we have to think consciously about it. Just chunking up an existing course into smaller bits isn't taking into account the decay of memory over time and the gradual expansion of capability. We have to design an experience!
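To make the scheduling question concrete, here's a toy sketch of an expanding-interval scheduler. The heuristics and numbers are placeholders I invented, not research-derived values; real tuning would follow the kind of guidance Thalheimer summarizes:

```python
def reactivation_schedule(complexity, frequency_days, sessions=5):
    """Toy expanding-interval scheduler (invented heuristics, not research).

    complexity: 1 (simple) .. 5 (complex); more complex tasks get
        shorter initial gaps, since they need more reactivation.
    frequency_days: how often the task is performed on the job;
        rarely performed tasks warrant longer total coverage.
    Returns day offsets for each micro-session after the initial launch.
    """
    gap = max(1.0, frequency_days / (2 * complexity))
    schedule, day = [], 0.0
    for _ in range(sessions):
        day += gap
        schedule.append(round(day))
        gap *= 1.8   # expand spacing as memory consolidates
    return schedule

# E.g. a fairly complex task performed about monthly:
print(reactivation_schedule(complexity=4, frequency_days=30))
# -> [4, 10, 23, 45, 84]
```

Even a crude model like this forces the design questions: how soon to launch, how fast to expand, and when to stop, rather than just streaming content.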

Microlearning is the right thing to do, given our cognitive architecture. Only so much ‘strengthening’ of the links can happen in any one day, so to develop a full new capability will take time. And that means small bits over time makes sense. But choosing the right bits, the right frequency, the right duration, and the right ramp up in complexity, is non-trivial.  So let’s laud the movement, but not delude ourselves either that performance support or a stream of content is learning. Learning, that is systematically changing the reliable behavior of the most complex thing in the known universe, is inherently complex. We should take it seriously, and we can.

1 February 2017

Other writings

Clark @ 8:04 AM

It occurs to me to mention some of the other places you can find my writings besides here (and how they differ ;).  My blog posts are pretty regular (my aim is 2/week), but tend to have ideas that are embryonic or a bit ‘evangelical’. First, I’ve written four books; you can check them out and get sample chapters at their respective sites:

Engaging Learning: Designing e-Learning Simulation Games

Designing mLearning: Tapping Into the Mobile Revolution for Organizational Performance

The Mobile Academy: mLearning For Higher Education

Revolutionize Learning &  Development: Performance and Information Strategy for the Information Age

They’re designed to be the definitive word on the topic, at least at the moment.

I've also written or co-written a number of chapters in a variety of books. The books include The Really Useful eLearning Instruction Manual, Creating a Learning Culture, Michael Allen's eLearning Annual 2009, and a bunch of academic handbooks (Mobile Learning, Experiential Learning, Wiley Learning Technology ;). These tend to be longer than an article, with a pretty thorough coverage of whatever topic is on tap.

Then there are articles in a variety of magazines. These tend to be aggregated thoughts that are longer than a blog post, but not as thorough as a chapter. In particular, they are things I think need to be heard (or read). So, my writing has shown up in:

eLearnMag

Learning Solutions

CLO

The topics vary. (For the eLearnMag ones, you’ll have to search for my name owing to their interface, and they tend to be more like editorials.)

And then there are blog posts for others that are a bit longer than my usual blog post, and close to an article in focus:

The Deeper eLearning series for Learnnovators

A monthly article for Litmos.

These, too, are more like articles in that they’re focused, and deeper than my usual blog post.  For the latter I cover a lot of different topics, so you’re likely to find something relevant there in many different areas.

I’m proud of it all, but for a quick update on a topic, you might be best seeing if there’s a Litmos post on it first.  That’s likely to be relatively short and focused if there is one. And, of course, if it’s a topic you’re interested in advancing in and I can help, do let me know.

5 January 2017

Mobile Lesson

Clark @ 8:04 AM

Designing mLearning book

I'm preparing my keynote for a mobile conference, and it's caused an interesting reflection. My mlearning books came out in 2011, and subsequently I've written on the revolution. And I've been speaking on both of late, but in some ways the persistent interest in mobile intrigues me.

While my services push better elearning design and the bigger picture of elearning, mobile isn't going away. My trip to China to keynote this past year was on mlearning (as was one the year before), and now again I'm talking on the topic. What does this mean?

As I wrote before, China is much bigger into mobile than we are. It’s likely because we had more ubiquity of internet access and computers, but they’re also a highly mobile populace.  And it makes sense that they’re showing a continuing interest. In fact, they specifically asked for a presentation that was advanced, not my usual introduction.

I'm also going to be presenting more advanced thinking to the upcoming audience, because the entire focus of the event is mlearning and I infer that they're already up on the basics. The focus in my books was to get people thinking differently about mobile (because it's not about courses on a phone), but certainly that was understood in China. I think it's also understood by most of the developers. I'm less certain it's understood in the elearning field (corporate and education), at least not yet.

In many ways, mobile was a catalyst for the revolution. I think of mlearning as much more than courses, and my models focused on performance support and social more than formal learning. That is really one of the revolution's two-fold foci (the "L&D isn't doing near what it could and should", complementing the "and what it is doing, it is doing badly" :). In that way, these devices can be a wedge in the door for a broader focus.

Yet mobile is just a platform for enabling the same types of experiences, the same types of cognitive support, as any other platform, from conversation to artificial intelligence. It is an important one, however, with the unique properties of doing things whenever and wherever you are, and doing things because of when and where you are.

So I get that mlearning is of interest because of the ubiquity, but the thinking that goes into mobile really goes beyond mobile.  It’s about aligning with us, supporting our needs to communicate and collaborate.  That’s still a need, a useful message, and an opportunity.  Are you mobilizing?

 
