Learnlets


Clark Quinn’s Learnings about Learning

Making Sense of Research

17 March 2015 by Clark

A couple of weeks ago, I was riffing on sensors: how mobile devices are getting equipped with all sorts of new sensors, the potential for more, and what they might bring. Part of that discussion was a brief mention of sensor nets, and how aggregating all this data could be of interest too. And lo and behold, a massive example was revealed last week.

The context was the ‘spring forward’ event Apple held, where they announced their new products. The most anticipated was the Apple Watch (part of the driving force behind my post on wearables), the new iConnected device for your wrist. The second major announcement was their new MacBook, a phenomenally thin laptop with some amazing specs on weight and screen display, as well as some challenging tradeoffs.

A less noticed announcement was a new research endeavor, but I wonder if it isn’t the most game-changing element of them all. The announcement was ResearchKit, and it’s about sensor nets.

So, smartphones have lots of sensors.  And the watch will have more.  They can already track a number of parameters about you automatically, such as your walking.  There can be more, with apps that can ask about your eating, weight, or other health measurements.  As I pointed out, aggregating data from sensors could do things like identify traffic jams (Google Maps already does this), or collect data like restaurant ratings.

What Apple has done is to focus specifically on health data via HealthKit, and partner with research hospitals. What they’re saying to scientists is: “we’ll give you anonymized health data, you put it to good use”. A number of research centers are on board, and already collecting data about asthma and more. The possibility is to use analytics that combine the power of large numbers with a wealth of other descriptive data to investigate things at scale. In general, research like this is hard because it’s hard to recruit large numbers of subjects, yet large samples are a much better basis for study (for example, the China-Cornell-Oxford Project was able to look at a vast breadth of diet to generate innovative insights into nutrition and health).

And this could be just the beginning: collecting data en masse (while successfully addressing privacy concerns) can be a source of great insight if it’s done right. Having devices that are with you and capable of capturing a variety of information gives the opportunity to mine that data for expected, and unexpected, outcomes.
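To make the idea concrete, here’s a minimal sketch of the pattern such studies rely on: anonymize individual records, then compute cohort-level statistics. All names, fields, and values here are invented for illustration; this is not Apple’s API.

```python
import hashlib
import statistics

# Hypothetical raw readings reported by individual devices.
readings = [
    {"user": "alice", "condition": "asthma", "resting_heart_rate": 72},
    {"user": "bob",   "condition": "asthma", "resting_heart_rate": 81},
    {"user": "carol", "condition": "none",   "resting_heart_rate": 64},
]

def anonymize(record):
    """Replace the identity with a one-way hash, so researchers
    see cohorts rather than individuals."""
    token = hashlib.sha256(record["user"].encode()).hexdigest()[:12]
    return {**record, "user": token}

anonymized = [anonymize(r) for r in readings]

# Aggregate by cohort: the kind of at-scale view the research centers want.
cohort = [r["resting_heart_rate"] for r in anonymized if r["condition"] == "asthma"]
print(statistics.mean(cohort))  # → 76.5
```

Real studies need much stronger privacy guarantees than a hash, of course; the point is just the shape of the pipeline: strip identity, then analyze in aggregate.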

A new iDevice is always cool, and while it’s not the first smart watch (nor was the iPhone the first smartphone, the iPad the first tablet, nor the iPod the first music player), Apple has a way of making the experience compelling. As with the iPad, I haven’t yet seen the personal value proposition, so I’m on the fence. But the ability to collect data in a massive way that could support ground-breaking insights and innovations in medicine? That has the potential to affect millions of people around the world. Now that is impact.

On the road again

3 March 2015 by Clark

Well, some more travels are imminent, so I thought I’d update you on where the Quinnovation road show will be on tour this spring:

  • March 9-10 I’ll be collaborating  with Sarah Gilbert and Nick Floro to deliver  ATD’s  mLearnNow event in Miami on mobile
  • On the 11th I’ll be at a private event talking the Revolution to a select group  outside Denver
  • Come  the 18th I’ll be inciting the revolution  at the ATD Golden Gate chapter meeting here in the Bay Area
  • On the 25th-27th, I’ll be in Orlando again instigating at  the eLearning Guild’s Learning Solutions conference
  • May 7-8 I’ll be kicking up my heels about the revolution for the eLearning Symposium in Austin
  • I’ll be stumping the revolution at another vendor event in Las Vegas 12-13
  • And June 2-3 I’ll be myth-smashing for  ATD Atlanta, and then workshopping game design

So, if you’re at one of these, do come up and introduce yourself and say hello!

mLearning more than mobile elearning?

25 February 2015 by Clark

Someone tweeted about their mobile learning credo, and mentioned the typical ‘mlearning is elearning, extended’ view. Which I rejected, as  I believe mlearning is much more (and so should elearning be).  And then I thought about it some more.  So I’ll lay out my thinking, and see what you think.

I have been touting that mLearning could and should be focused, as should P&D, on  anything  that helps us achieve our goals better. Mobile, paper, computers, voodoo, whatever technology works.  Certainly in organizations.  And this yields some interesting implications.

So, for instance, this  would include performance support and social networks.  Anything that requires understanding how people work and learn would be fair game. I was worried about whether that fit some operational aspects like IT and manufacturing processes, but I think I’ve got that sorted.  UI folks would work on external products, and any internal software development, but around that, helping folks use tools and processes belongs to those of us who facilitate organizational performance and development.  So we, and mlearning, are about any of those uses.

But the person, despite seeming to come from a vendor to orgs, not schools, could be talking about schools instead, and I wondered whether mLearning for schools, by definition, really is only about supporting learning. And I can see the case for that: that mlearning in education is about using mobile to help people learn, not perform. It’s about collaboration, for sure, and tools to assist.

Note I’m not making the case for schools as they are; a curriculum rethink definitely needs to accompany using technology in schools in many ways. Koreen Pagano wrote this nice post separating Common Core teaching from assessment, which goes along with my beliefs about the value of problem solving. And I also laud Roger Schank‘s views, such as the value (or not) of the binomial theorem as a classic example.

But then, mobile should be a tool in learning, so it can work as a channel for content, but also for communication, capture, and compute (the 4C’s of mlearning). And there’s the emergent capability of contextual support (the 5th C, i.e. combinations of the first four). So this view would argue that mlearning can be used for performance support in accomplishing a meaningful task that’s part of a learning experience.

That would take me back to mlearning being more than just mobile elearning, a distinction Jason Haag has aptly drawn. Sure, mobile elearning can be a subset of mlearning, but it’s not the whole picture. Does this make sense to you?

Making ‘sense’

24 February 2015 by Clark

I recently wrote about wearables, where I focused on form factor and information channels.  An article I recently read talked about a guy who builds spy gear, and near the end he talked about some things that started me thinking about an extension of that for all mobile, not just wearables.  The topic is  sensors.

In the article, he talks about how, in the future, glasses could detect whether you’ve been around bomb-making materials:

“You can literally see residue on someone if your glasses emit a dozen different wavelengths of microlasers that illuminate clothing in real time and give off a signature of what was absorbed or reflected.”

That’s pretty amazing, chemical spectrometry on the fly.  He goes on to talk about distance vision:

“Imagine you have a pair of glasses, and you can just look at a building 50 feet away, 100 feet away, and look right through the building and see someone moving around.”

Now, you may or may not like what he’s doing with that, but imagine applying it elsewhere: identifying where people are for rescue, or identifying materials for quality control.

Heck, I’d find it interesting just to augment the camera with infrared and ultraviolet: imagine being able to use the camera on your phone or glasses to see what’s happening at night, e.g. wildlife (tracking coyotes or raccoons, and managing to avoid skunks!).  Night vision, and seeing things that fluoresce under UV would both be really cool additions.

I’d be interested too in having them enlarge as well, bringing small things into view like a magnifying glass or microscope.

It made me think about all the senses we could augment. I was thinking about walking our dogs, and how their olfactory life is much richer than ours. They are clearly sensing things beyond our olfactory capabilities, and it would be interesting to have some microscent detectors that could follow faint traces to track animals (or know which owner is not adequately controlling a dog, ahem). They could potentially serve as smoke or carbon monoxide detectors too.

Similarly, auditory enhancement: could we hear things fainter than our ears detect, or have them serve as a stethoscope?  Could we detect far off cries for help that our ears can’t? Of course, that could be misused, too, to eavesdrop on conversations.  Interesting ethical issues come in.

And we’ve already heard about the potential to measure one’s movement, blood pressure, pulse, temperature, and maybe even blood sugar, to track one’s health.  The fit bands are getting smarter and more capable.

There is also the possibility of tracking things we personally can’t sense precisely: quantitative measurement of ambient temperature and air pressure is already possible in some devices. The thermometer could be a health and weather guide, and a barometer/altimeter would be valuable for hiking in addition to weather.

The combination of reporting these could be valuable too. Sensor nets, where the data from many micro sensors are aggregated, have interesting possibilities. Either with known combinations, such as aggregating temperature and air pressure to help with weather, or with machine learning, where, for example, we include sensitive motion detectors and might be able to learn to predict earthquakes as animals supposedly can. Sound, too, could be used to triangulate on cries for help, and material detectors could help locate sources of pollution.
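As a toy illustration of the ‘known combinations’ case, here’s a sketch (with hypothetical readings) of aggregating barometric data from many devices in one area, then flagging the sustained pressure drop that often precedes a storm:

```python
from statistics import mean

# Hypothetical barometric readings (hPa) from many phones in one area,
# bucketed into consecutive half-hour windows, earliest first.
windows = [
    [1015.2, 1014.9, 1015.1],
    [1012.8, 1013.0, 1012.7],
    [1009.5, 1009.9, 1009.6],
]

# Average each window to smooth out individual-sensor noise.
averages = [mean(w) for w in windows]

# A sustained drop across every consecutive pair of windows is the signal.
falling = all(later < earlier for earlier, later in zip(averages, averages[1:]))
print("storm likely" if falling else "no trend")  # → storm likely
```

The interesting part is that no single phone’s barometer is trustworthy on its own; the aggregate is what carries the signal.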

We’ve done amazing things with technology, and sensors are both shrinking and getting more powerful. Imagine having sensors scattered about your body in various wearables and integrating that data in known ways, and agreeing for anonymous aggregation for data mining.  Yes, there are concerns, but benefits too.

We can put these together in interesting ways, notifications of things we should pay attention to, or just curiosity to observe things our natural senses can’t detect.  We can open up the world in powerful ways to support being more informed  and more productive.  It’s up to us to harness it in worthwhile ways.

Wearables?

21 January 2015 by Clark

In a discussion last week, I suggested that  the  things I was excited about included wearables. Sure enough, someone asked if I’d written anything about it, and I haven’t, much. So here are some initial thoughts.

I admit I was not a Google Glass ‘Explorer’ (and now the program has ended). While tempted to experiment, I tend not to spend money until I see how the device is really going to make me more productive. For instance, when the iPad was first announced, I didn’t want one. Between the time it was announced and the time it was available, however, I figured out how I’d use it to produce, not just consume. I got one the first day it came out. By the same rationale, I got a Palm Pilot pretty early on, and it made me much more effective. I haven’t gotten a wrist health band, on the other hand, though I don’t think they’re bad ideas; they’re just not what I need.

The point being that I want to see a clear value proposition before I spend my hard-earned money. So what am I thinking in regards to wearables? What wearables do I mean? I’m talking wrist devices, specifically. (I may eventually warm up to glasses as well, when what they can do is more augmented reality than it is now.) Why wrist devices? That’s what I’m wrestling with, trying to conceptualize what is so far a more intuitive assessment.

Part of it, at least, is that it’s with me all the time, but in an unobtrusive way.  It supports a quick flick of the wrist instead of pulling out a whole phone. So it can do that ‘smallest info’ in an easy way. And, more importantly, I think it can bring things to my attention more subtly than can a phone.  I don’t need a loud ringing!

I admit that I’m keen on a more mixed-initiative relationship than I currently have with my technology. I use my smartphone to get things I need, and it can alert me to things that I’ve indicated I’m interested in, such as events that I want an audio alert for. And of course, for incoming calls. But what about things that my systems come up with on their own? This is increasingly possible, and again desirable. Using context, and with some understanding of my goals, a system might be able to be proactive. So imagine you’re out and about, and your watch reminds you that while you’re here you wanted to pick up something nearby, and provides the item and location. Or reminds you to prep for that upcoming meeting and provides some minimal but useful info. Note that this is largely not what’s currently on offer. We already have geofencing to do some of this, but right now for it to happen you largely have to pull out your phone, or have it make a fairly intrusive noise to be heard from your pocket or purse.
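The geofencing part of this is mechanically simple, which is what makes the proactive version so plausible. A sketch, with made-up reminder data and a made-up 200 m threshold: compute the distance from the current position to each place-tied reminder, and surface the ones within range.

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # 6371000 m: mean Earth radius

# A hypothetical errand tied to a place.
reminders = [
    {"note": "pick up the dry cleaning",
     "lat": 37.7793, "lon": -122.4193, "radius_m": 200},
]

def due_reminders(lat, lon):
    """Return the notes for every reminder within its trigger radius."""
    return [r["note"] for r in reminders
            if distance_m(lat, lon, r["lat"], r["lon"]) <= r["radius_m"]]

print(due_reminders(37.7790, -122.4190))  # nearby, so the reminder fires
```

The hard part isn’t the math; it’s the system knowing your goals well enough to have attached that reminder to that place in the first place, and delivering it subtly.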

So two things about this: one, why the watch and not the phone, and the other, why not the glasses? The watch form factor is, to me, a more accessible interface to serve as an interactive companion. As I suggested, pulling a phone out of the pocket, turning it on, going through the security check (even just my fingerprint), adds more overhead than I necessarily want. If I can have something less intrusive, even as part of a system and not fully capable on its own, that’s OK. Why not glasses? I guess it’s just that they seem more unnatural. I am accustomed to having information on my wrist, and while I wear glasses, I want them to be invisible to me. I would love to have a heads-up display at times, but all the time would seem to get annoying. I’ll stretch and suggest that the empirical result that most folks have stopped wearing them most of the time bears out my story.

Why not a ring, or a pendant, or…? A ring seems to have too small an interface area. A pendant isn’t easily observable. My wrist is easy to glance at (hence, watches). Why not a whole forearm console? If I need that much interface, I can always pull out my phone. Or jump to my tablet. Maybe I will eventually want an iBracer, but I’m not yet convinced. A forearm holster for my iPhone? Hmmm… maybe too geeky.

So, reflecting on all this, it appears I’m thinking about tradeoffs of utility versus intrusion. A wrist device seems to fit a sweet spot in an ecosystem of tech: the quick glance, then the pocket access, and then various tradeoffs of size and weight for real productivity between tablets and laptops.

Of course, the real issue is whether there’s sufficient information available through the watch to make a value proposition. Is there enough that’s easy to get to that doesn’t require a phone? Check the temperature? Take a (voice) note? Get a reminder, take a call, check your location? My instinct is that there is. There are times I’d be happy not to have to take my phone (to the store, to a party) if I could take calls on my wrist, do minimal note taking and checking, and navigate. From the business perspective, there’s also performance support, whether push or pull. I don’t see it for courses, but for just-in-time… And contextual.

This is all just thinking aloud at this point.  I’m contemplating the iWatch but don’t have enough information as of yet.  And I may not feel the benefits outweigh the costs. We’ll see.

#DevLearn 14 Reflections

5 November 2014 by Clark

This past week I was at the always great DevLearn conference, the biggest and arguably best yet.  There were some hiccups in my attendance, as  several blocks of time were taken up with various commitments both work and personal, so for instance I didn’t really get a chance to peruse the expo at all.  Yet I attended keynotes and sessions, as well as presenting, and hobnobbed with folks both familiar and new.

The keynotes were arguably even better than before, and a high bar had already been set.

Neil deGrasse Tyson was eloquent and passionate about the need for science and the lack of match between school and life.    I had a quibble about his statement that doing math teaches problem-solving, as it takes the right type of problems (and Common Core is a step in the right direction)  and  it takes explicit scaffolding.  Still, his message was powerful and well-communicated. He also made an unexpected connection between Women’s Liberation and the decline of school quality that I hadn’t considered.

Beau Lotto also spoke, linking how our past experience alters our perception to necessary changes in learning. While I was familiar with his starting point of perception (a fundamental part of cognitive science, my doctoral field), he took it in a very interesting and useful direction in an engaging and inspiring way. His take-home message, teach not how to see but how to look, was succinct and apt.

Finally, Belinda Parmar took on the challenge of women in technology, and documented how  small changes can  make a big difference. Given the madness of #gamergate, the discussion was a useful reminder of inequity in many fields and for many.  She left lots of time to have a meaningful discussion about the issues, a nice touch.

Owing to the commitments both personal and speaking, I didn’t get to see many sessions. I had the usual situation of good ones, and a not-so-good one (though I admit my criteria are kind of high). I like that the Guild balances known speakers and topics with taking some chances on both. I also note that most of the known speakers are folks I respect who continue to think ahead and bring new perspectives, even if in a track representing their work. As a consequence, the overall quality is always very high.

And the associated events continue to improve. The DemoFest was almost too big this year: so many examples that it’s hard to know where to start, as you want to be fair and see them all, but it’s just too monumental. Of course, the Guild had a guide that grouped them, so you could drill down into the ones you wanted to see. The expo reception was a success as well, and the various snack breaks suited the opportunity to mingle. I kept missing the ice cream, but perhaps that’s for the best.

I was pleased to have the biggest turnout yet for a workshop, and take the interest in elearning strategy as an indicator that the revolution is taking hold.  The attendees were faced with the breadth of things to consider across advanced ID, performance support, eCommunity, backend integration, decoupled delivery, and then were led through the process of identifying elements and steps in the strategy.  The informal feedback was that, while daunted by the scope, they were excited by the potential and recognizing the need to begin.  The fact that the Guild is holding the Learning Ecosystem conference and their release of a new and quite good white paper by Marc Rosenberg and Steve Foreman are further evidence that awareness is growing.   Marc and Steve carve up the world a little differently than I do, but we say similar things about what’s important.

I am also pleased that  Mobile  interest continues to grow, as evidenced by the large audience at our mobile panel, where I was joined by other mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell.  They provide nicely differing  viewpoints, with Sarah representing the irreverent designer, Robert the pragmatic systems perspective, and Chad the advanced technology view, to complement my more  conceptual approach.  We largely agree, but represent different ways of communicating and thinking about the topic. (Sarah and I will be joined by Nick Floro for ATD’s mLearnNow event in New Orleans next week).

I also talked about trying to change the pedagogy of elearning at the Wadhwani Foundation, the approach we’re taking and the challenges we face. The goal I’m involved in is job skilling, and consequently there’s a real need and a real opportunity. What I’m fighting for is meaningful practice as a way to achieve real outcomes. We have some positive steps and some missteps, but I think we have the chance to have a real impact. It’s a work in progress, and fingers crossed.

So what did I learn?  The good news is that the audience is getting smarter, wanting more depth in their approaches and breadth in what they address. The bad news appears to be that the view of ‘information dump & knowledge test = learning’ is still all too prevalent. We’re making progress, but too slowly (ok, so perhaps patience isn’t my strong suit ;).  If you haven’t, please do check out the Serious eLearning Manifesto to get some guidance about what I’m talking about (with my colleagues Michael Allen, Julie Dirksen, and Will Thalheimer).  And now there’s an app for that!

If you want to get your mind around the forefront of learning technology, at least in the organizational space, DevLearn is the place to be.

Cognitive prostheses

28 October 2014 by Clark

While our cognitive architecture has incredible capabilities (how else could we come up with advances such as Mystery Science Theater 3000?), it also has limitations. The same adaptive capabilities that let us cope with information overload in both familiar and new ways also lead to some systematic flaws. And it led me to think about the ways in which we support these limitations, as they have implications for designing solutions for our organizations.

The first limit is at the sensory level. Our mind actually processes pretty much all the visual and auditory sensory data that arrives, but it disappears pretty quickly (within milliseconds) except for what we attend to. Basically, your brain fills in the rest (which leaves open the opportunity to make mistakes). What do we do? We’ve created tools that allow us to capture things accurately: cameras and audio recorders. These allow us to capture the context exactly, not as our memory reconstructs it.

A second limitation is our ‘working’ memory. We can’t hold too much in mind at one time. We ‘chunk’ information together as we learn it, and can then hold more total information at one time. Also, the format of working memory is largely ‘verbal’. Consequently, using tools like diagramming, outlines, or mindmaps adds structure to our knowledge and supports our ability to work on it.

Another limitation of our working memory is that it doesn’t support complex calculations with many intermediate steps. Consequently we need ways to deal with this. External representations (as above), such as recording intermediate steps, work, but we can also build tools that offload that process, such as calculators. Wizards, or interactive dialog tools, are another form of calculator.

Processing information in short-term memory can lead to it being retained in long-term memory. Here the storage is almost unlimited in time and scope, but it is hard to get things in there, and they aren’t remembered exactly, but instead by meaning. Consequently, models are a better learning strategy than rote learning. And external sources, like the ability to look up or search for information, are far better than trying to get it all in the head.

Similarly, external support for when we do have to do things by rote is a good idea. So, support for process is useful and the reason why checklists have been a ubiquitous and useful way to get more accurate execution.

In execution, we have a few flaws too. We’re heavily biased to solve new problems in the ways we’ve solved previous problems (even if that’s not the best approach). We’re also likely to use tools in familiar ways and miss new ways to use tools to solve problems. There are ways to prompt lateral thinking at appropriate times, and we can both make access to such support available, and even trigger it if we have contextual clues.

We’re also biased to prematurely converge on an answer (intuition) rather than seek to challenge our findings. Access to data and support for capturing and invoking alternative ways of thinking are more likely to prevent such mistakes.

Overall, our use of more formal logical thinking fatigues quickly. Scaffolding help like the above decreases the likelihood of a mistake and increases the likelihood of an optimal outcome.

When you look at performance gaps, you should look to such approaches first, and look to putting information in the head last. This more closely aligns our support efforts with how our brains really think, work, and learn. This isn’t a complete list, I’m sure, but it’s a useful beginning.

#DevLearn Schedule

24 October 2014 by Clark

As usual, I will be at DevLearn (in Las Vegas) this next week, and welcome meeting up with you there.  There  is a lot going on.  Here’re the things I’m involved in:

  • On Tuesday, I’m running an all day workshop on eLearning Strategy. (Hint: it’s really a Revolutionize L&D  workshop  ;).  I’m pleasantly surprised at how many folks will be there!
  • On Wednesday at 1:15 (right after lunch), I’ll be speaking on the design approach  I’m leading  at the Wadhwani Foundation, where we’re trying to integrate learning science with pragmatic execution.  It’s at least partly a Serious eLearning Manifesto session.
  • On Wednesday at 2:45, I’ll be part of a panel on mlearning with my fellow mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell, chaired by conference program director David Kelly.

Of course, there’s much more. A few things I’m looking forward to:

  • The  keynotes:
    • Neil deGrasse Tyson, a fave for his witty support of science
    • Beau Lotto talking about perception
    • Belinda Parmar talking about women in tech (a burning issue right now)
  • DemoFest, all the great examples people are bringing
  • and, of course, the networking opportunities

DevLearn is probably my favorite conference of the year: learning focused, technologically advanced, well organized, and with the right people.  If you can’t make it this year, you might want to put it on your calendar for another!

Extending Mobile Models

21 October 2014 by Clark

In preparation for a presentation, I was reviewing my mobile models. You may recall I started with my 4C’s model (Content, Compute, Communicate, & Capture), and have mapped that further onto Augmenting Formal, Performance Support, Social, & Contextual. I’ve refined it as well, separating out contextual and social as different ways of looking at formal and performance support. And, of course, I’ve elaborated it again, and wonder whether you think this more detailed conceptualization makes sense.

So, my starting point was realizing that it wasn’t just content. That is, there’s a difference between compute and content: the interactivity was an important part of the 4C’s, so the characteristics in the content box weren’t discriminated enough. So the two new initial sections are mlearning content and mlearning compute, by self or social. So, we can be getting things for an individual, or it can be something that’s socially generated or socially enabled.

The point is that content is prepared media, whether text, audio, or video.  It can be delivered or accessed as needed. Compute, interactive capability, is harder, but potentially more valuable. Here, an individual might actively practice, have mixed initiative dialogs, or even work with others or tools to develop an outcome or update some existing shared resources.

Things get more complex when we go beyond these elements. So I had capture as one thing, and I’m beginning to think it’s two: one is capturing the current context and keeping and sharing that for various purposes, and the other is the system using that context to do something unique.

To be clear here, capture is where you use the text insertion, microphone, or camera to catch unique contextual data (or user input).  It could also be other such data, such as a location, time, barometric pressure, temperature, or more. This data, then, is available to review, reflect on, or more.  It can be combinations, of course, e.g. a picture at this time and this location.

Now, if the system uses this information to do something different than under other circumstances, we’re contextualizing what we do. Whether it’s because of when you are, providing specific information, or where you are, using location characteristics, this is likely to be the most valuable opportunity. Here I’m thinking alternate reality games or augmented reality (whether it’s voiceover, visual overlays, what have you).

And I think this is device independent, e.g. it could apply to watches or glasses or…, as well as phones and tablets. It means my 4 C’s become: content, compute, capture, and contextualize. To ponder.
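A quick sketch of how I might lay out the revised model in code, crossing each C with self versus social. The example use cases are illustrative placeholders, not definitive mappings:

```python
# The revised 4 C's as a small taxonomy, each crossed with self vs. social.
# The example use cases are illustrative guesses, not part of the model itself.
MOBILE_CS = {
    "content":       {"self": "reference video",        "social": "shared wiki article"},
    "compute":       {"self": "practice quiz",          "social": "co-edited spreadsheet"},
    "capture":       {"self": "voice memo",             "social": "geotagged photo feed"},
    "contextualize": {"self": "location-aware job aid", "social": "augmented-reality tour"},
}

def describe(c, mode):
    """Render one cell of the taxonomy as a readable label."""
    return f"{c}/{mode}: {MOBILE_CS[c][mode]}"

for c in MOBILE_CS:
    print(describe(c, "self"))
```

Even this toy layout makes the point that the categories are orthogonal to the self/social dimension, which is the refinement over the original model.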

So, this is a more nuanced look at the mobile opportunities, and certainly more complex as well. Does the greater detail provide greater benefit?

Learning in 2024 #LRN2024

17 September 2014 by Clark

The eLearning Guild is celebrating its 10th year, and is using the opportunity to reflect on what learning will look like 10 years from now. While I couldn’t participate in the twitter chat they held, I optimistically weighed in: “learning in 2024 will look like individualized personal mentoring via augmented reality, AI, and the network”. However, I thought I would elaborate in line with a series of followup posts leveraging the #lrn2024 hashtag. The twitter chat had a series of questions, so I’ll address them here (with a caveat: our learning really hasn’t changed; our wetware hasn’t evolved in the past decade and won’t in the next; our support of learning is what I’m referring to):

1. How has learning changed in the last 10 years (from the perspective of the learner)?

I reckon the learner has seen a significant move to more elearning instead of an almost complete dependence on face-to-face events. And I reckon most learners have begun to use technology in their own ways to get answers, whether via the Google, or social networks like Facebook and LinkedIn. And I expect they’re seeing more media such as videos and animations, and may even be creating their own. I also expect that the elearning they’re seeing is not particularly good, nor improving, if not actually decreasing in quality. I expect they’re seeing more info dump/knowledge test, more and more ‘click to learn more‘, more tarted-up drill-and-kill. For which we should apologize!

2.  What is the most significant change technology has made to organizational learning in the past decade?

I reckon there are two significant changes that have happened. One is rather subtle as yet, but will be profound, and that is the ability to track more activity, mine more data, and gain more insights. The Experience API coupled with analytics is a huge opportunity. The other is the rise of social networks. The ability to stay more tightly coupled with colleagues, sharing information and collaborating, has really become mainstream in our lives, and is going to have a big impact on our organizations. Working ‘out loud’, showing our work, and working together is a critical inflection point in bringing learning back into the workflow in a natural way and away from the ‘event’ model.

3.  What are the most significant challenges facing organizational learning today?

The most significant challenge is the status quo: the belief that an information-oriented event model has any relationship to meaningful outcomes.  This plays out in so many ways: order-taking for courses, equating information with skills, being concerned with speed and quantity instead of quality of outcomes, not measuring the impact; the list goes on.   We've deluded ourselves that an LMS and a rapid elearning tool mean we're doing something worthwhile, when that's profoundly wrong.  L&D needs a revolution.

4.  What technologies will have the greatest impact on learning in the next decade? Why?

The short answer is mobile.  Mobile is the catalyst for change. So many other technologies go through the hype cycle: initial over-excitement, crash, and then a gradual resurgence (cf. virtual worlds), but mobile has resisted that pattern for the simple reason that its value proposition is so strong.  The cognitive augmentation that digital technology provides, available whenever and wherever you are, clearly has benefits, and it's not courses!  It will naturally incorporate augmented reality with the variety of new devices we're seeing, and be contextualized as well.  We're seeing a richer picture of how technology can support us in being effective, and L&D can facilitate these other activities as a way to move to a more strategic and valuable role in the organization.  As above, add new tracking and analysis tools, and social networks.  I'll also note that simulations/serious games are an opportunity that has yet to really be capitalized on.  (There are reasons I wrote those books :)

5.  What new skills will professionals need to develop to support learning in the future?

As I wrote (PDF), the new skills that are necessary fall into two major categories: performance consulting and interaction facilitation.  We shouldn't design courses until we've ascertained that no other approach will work, so we need to get down to the real problems. We should hope the answer comes from the network when it can, design performance support solutions when it can't, and reserve courses for only what absolutely has to be in the head. Getting good outcomes from the network takes facilitation, and I think facilitation is a good model for promoting innovation, supporting coaching and mentoring, and helping individuals develop self-learning skills.  So the ability to get at the root causes of problems, choose between solutions, and measure the impact is key for the first part; understanding what skills individuals need (whether performers or mentors/coaches/leaders) and how to develop them is the key new addition for the second.

6.  What will learning look like in the year 2024?

Ideally, it would look like an 'always on' mentoring solution: the experience of someone always with you, watching your performance and providing just the right guidance to help you perform in the moment and develop over time. Learning will be layered onto your activities; only occasionally will it require special events, and mostly it will be wrapped around your life in a supportive way.  Some of this will be system-delivered, and some will come from the network, but it should feel like you're being cared for in the most efficacious way.

In closing, I note that, unfortunately, my Revolution book and the Manifesto were both driven by a sense of frustration at the lack of meaningful change in L&D. Hopefully they're riding or catalyzing the needed change, but in a cynical mood I might believe that things won't change nearly as much as I'd hope. I also remember a talk (cleverly titled 'Predict Anything but the Future' :) that said the future tends to arrive much as an informed basis would predict, but with an unexpected twist; it'll be interesting to discover what that twist will be.
