Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

25 March 2015

Tom Wujec #LSCon Keynote Mindmap

Clark @ 7:02 am

Tom Wujec gave a discursive and well-illustrated talk about how changes in technology are changing industry, ultimately homing in on creativity.  Despite a misstep mentioning Kolb’s invalid learning styles instrument, it was entertaining and intriguing.

 

24 March 2015

Tech Limits?

Clark @ 8:26 am

A couple of times last year, firms with some exciting learning tools approached me to talk about the market.  And in both cases, I had to advise them that there were some barriers they’d have to address. That was brought home to me in another conversation, and it makes me worry about the state of our industry.

So the first tool is based upon a really sound pedagogy that is consonant with my activity-based learning approach.  The basis is giving learners assignments very much like the assignments they’ll need to accomplish in the workplace, and then resourcing them to succeed.  They wanted to make it easy for others to create these better learning designs (as part of a campaign for better learning). The only problem was, you had to learn the design approach as well as the tool. Their interface wasn’t ready for prime time, but the real barrier was getting people to be able to use a new tool. I indicated some of the barriers, and they’re reconsidering (while continuing to develop content against this model as a service).

The second tool supports virtual role plays in a powerful way, having smart agents that react in authentic ways. And they, too, wanted to provide an authoring tool to create them.  And again my realistic assessment of the market was that people would have trouble understanding the tool.  They decided to continue to develop the experiences as a service.

Now, these are somewhat esoteric designs, though the former should be the basis of our learning experiences, and the latter would be a powerful addition to support a very common and important type of interaction.  The more surprising, and disappointing, issue came up with a conversation earlier this year with a proponent of a more familiar tool.

Without being specific (I’ve not received permission to disclose the details in all of the above), this person indicated that when training people in a popular and fairly straightforward tool, the biggest barrier wasn’t the underlying software model. I was expecting that too much of the training was based upon rote assignments without an underlying model, and that is the case, but there was a more fundamental barrier: too many potential users just didn’t have sufficient computer skills!  And I’m not talking about writing code, but fundamental understandings of files and ‘styles’ and other core computing elements just weren’t present in sufficient quantities in these would-be authors. Seriously!

Now I’ve complained before that we’re not taking learning design seriously, but obviously that’s compounded by a lack of fundamental computer skills.  Folks, this is elearning, not chalk learning, not chalk talk, not edoing, etc.  If you struggle to add new apps to your computer, or to find files, you’re not ready to be an elearning developer.

I admit I struggle to see how folks can assume that, without knowledge of design or knowledge of technology, they can still be elearning designers and developers. These tools are scaffolding to allow your designs to be developed. They don’t do design, nor will they magically cover for a lack of tech literacy.

So, let’s get realistic.  Learn about learning design, and get comfortable with tech, or please, please, don’t do elearning.  And I promise not to do music, architecture, finance, and everything else I’m not qualified to. Fair enough?

 

17 March 2015

Making Sense of Research

Clark @ 7:37 am

A couple of weeks ago, I was riffing on sensors: how mobile devices are getting equipped with all sorts of new sensors, the potential for more, and what they might bring.  Part of that discussion was a brief mention of sensor nets, and how aggregating all this data could be of interest too. And lo and behold, a massive example was revealed last week.

The context was the ‘spring forward’ event Apple held, where they announced their new products.  The most anticipated was the Apple Watch (part of the driving force behind my post on wearables), the new iConnected device for your wrist.  The second major announcement was their new MacBook, a phenomenally thin laptop with some amazing specs on weight and screen display, as well as some challenging tradeoffs.

One announcement that was less noticed was a new research endeavor, but I wonder if it isn’t the most game-changing element of them all.  It’s called ResearchKit, and it’s about sensor nets.

So, smartphones have lots of sensors.  And the watch will have more.  They can already track a number of parameters about you automatically, such as your walking.  There can be more, with apps that can ask about your eating, weight, or other health measurements.  As I pointed out, aggregating data from sensors could do things like identify traffic jams (Google Maps already does this), or collect data like restaurant ratings.

What Apple has done is to focus specifically on health data via their HealthKit, and partner with research hospitals. What they’re saying to scientists is “we’ll give you anonymized health data, you put it to good use”. A number of research centers are on board, and already collecting data about asthma and more.  The possibility is to use analytics that combine the power of large numbers with a wealth of other descriptive data to investigate things at scale.  In general, such research is hard because it’s hard to recruit large numbers of subjects, yet large numbers of subjects are a much better basis for study (for example, the China-Cornell-Oxford Project was able to look at a vast breadth of diets to generate innovative insights into nutrition and health).
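Conceptually, the large-N analysis this enables is straightforward: pool anonymized measurements and compute statistics across thousands of subjects. Here's a minimal sketch of the idea (the data, field names, and `summarize` function are all hypothetical; the real ResearchKit framework is an Objective-C/Swift API and far richer than this):

```python
from statistics import mean, stdev

# Hypothetical anonymized records: no identifiers, just a condition tag
# and a measurement (peak expiratory flow, in liters/minute).
records = [
    {"condition": "asthma", "peak_flow": 410},
    {"condition": "asthma", "peak_flow": 385},
    {"condition": "asthma", "peak_flow": 442},
    {"condition": "none", "peak_flow": 540},
    {"condition": "none", "peak_flow": 565},
]

def summarize(records, condition):
    """Aggregate a measurement across all anonymized subjects with a condition."""
    values = [r["peak_flow"] for r in records if r["condition"] == condition]
    return {"n": len(values), "mean": mean(values), "sd": stdev(values)}

print(summarize(records, "asthma"))
```

The power, of course, comes when `n` is in the tens of thousands rather than three: effects too small to see in a lab study become detectable at that scale.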

And this could be just the beginning: collecting data en masse (while successfully addressing privacy concerns) can be a source of great insight if it’s done right.  Having devices that are with you and capable of capturing a variety of information gives the opportunity to mine that data for expected, and unexpected, outcomes.

A new iDevice is always cool, and while it’s not the first smart watch (nor was the iPhone the first smartphone, the iPad the first tablet, or the iPod the first music player), Apple has a way of making the experience compelling.  As with the iPad at first, I haven’t yet seen the personal value proposition, so I’m on the fence.  But the ability to collect data in a massive way that could support ground-breaking insights and innovations in medicine? That has the potential to affect millions of people around the world.  Now that is impact.

24 February 2015

Making ‘sense’

Clark @ 8:19 am

I recently wrote about wearables, where I focused on form factor and information channels.  An article I read recently talked about a guy who builds spy gear, and near the end he talked about some things that started me thinking about an extension of that to all mobile, not just wearables.  The topic is sensors.

In the article, he talks about how, in the future, glasses could detect whether you’ve been around bomb-making materials:

“You can literally see residue on someone if your glasses emit a dozen different wavelengths of microlasers that illuminate clothing in real time and give off a signature of what was absorbed or reflected.”

That’s pretty amazing, chemical spectrometry on the fly.  He goes on to talk about distance vision:

“Imagine you have a pair of glasses, and you can just look at a building 50 feet away, 100 feet away, and look right through the building and see someone moving around.”

Now, you might or might not like what he’s doing with that, but imagine applying it elsewhere: identifying where people are for rescue, or identifying materials for quality control.

Heck, I’d find it interesting just to augment the camera with infrared and ultraviolet: imagine being able to use the camera on your phone or glasses to see what’s happening at night, e.g. wildlife (tracking coyotes or raccoons, and managing to avoid skunks!).  Night vision, and seeing things that fluoresce under UV would both be really cool additions.

I’d be interested, too, in having them enlarge as well, bringing small things into view like a magnifying glass or microscope.

It made me think about all the senses we could augment. I was thinking about walking our dogs, and how their olfactory life is much richer than ours.  They are clearly sensing things beyond our olfactory capabilities, and it would be interesting to have microscent detectors that could follow faint traces to track animals (or know which owner is not adequately controlling a dog, ahem).  They could potentially serve as smoke or carbon monoxide detectors too.

Similarly, auditory enhancement: could we hear things fainter than our ears detect, or have them serve as a stethoscope?  Could we detect far off cries for help that our ears can’t? Of course, that could be misused, too, to eavesdrop on conversations.  Interesting ethical issues come in.

And we’ve already heard about the potential to measure one’s movement, blood pressure, pulse, temperature, and maybe even blood sugar, to track one’s health.  The fit bands are getting smarter and more capable.

There is the possibility for other things we personally can’t directly track: measuring ambient temperature and air pressure quantitatively is already possible in some devices.  The thermometer could be a health and weather guide, and a barometer/altimeter would be valuable for hiking in addition to weather.

The combination of reporting these could be valuable too.  Sensor nets, where the data from many micro sensors are aggregated, have interesting possibilities. Either with known combinations, such as aggregating temperature and air pressure to help with weather, or with machine learning, where, for example, networks of sensitive motion detectors might learn to predict earthquakes the way animals supposedly can.  Sounds, too, could be used to triangulate cries for help, and material detectors could help locate sources of pollution.
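As a toy illustration of the aggregation idea (the readings and function names here are entirely hypothetical), scattered sensor reports can be pooled into coarse location cells and averaged, yielding a weather-style map from many tiny contributors:

```python
from collections import defaultdict

# Hypothetical readings from scattered micro sensors:
# (latitude, longitude, temperature in C, pressure in hPa)
readings = [
    (37.77, -122.42, 18.2, 1014.1),
    (37.77, -122.41, 18.6, 1013.8),
    (37.78, -122.42, 17.9, 1013.9),
    (40.71, -74.01, 9.4, 1021.3),
]

def aggregate(readings, cell_size=0.1):
    """Pool readings into coarse lat/lon cells and average each cell."""
    cells = defaultdict(list)
    for lat, lon, temp, pressure in readings:
        key = (round(lat / cell_size), round(lon / cell_size))
        cells[key].append((temp, pressure))
    return {
        key: (sum(t for t, _ in vals) / len(vals),   # mean temperature
              sum(p for _, p in vals) / len(vals))   # mean pressure
        for key, vals in cells.items()
    }

summary = aggregate(readings)
```

The same shape of computation, swapping the averaging step for a learned model, is what the machine-learning version would do.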

We’ve done amazing things with technology, and sensors are both shrinking and getting more powerful. Imagine having sensors scattered about your body in various wearables, integrating that data in known ways, and agreeing to anonymous aggregation for data mining.  Yes, there are concerns, but benefits too.

We can put these together in interesting ways: notifications of things we should pay attention to, or just the curiosity of observing things our natural senses can’t detect.  We can open up the world in powerful ways to support being more informed and more productive.  It’s up to us to harness it in worthwhile ways.

27 January 2015

70:20:10 and the Learning Curve

Clark @ 8:09 am

My colleague Charles Jennings recently posted on the value of autonomous learning (worth reading!), sparked by a diagram provided by another ITA colleague, Jane Hart (that I also thought was insightful). In Charles’ post he also included an IBM diagram that triggered some associations.

So, in IBM’s diagram, they talked about the access phase, where learning is separate; the integration phase, where learning is ‘enabled’ by work; and the on-demand phase, where learning is ‘embedded’. They talked about ‘point solutions’ (read: courses) for access, blended models for integration, and dynamic models for on-demand. The point was that the closer learning is to the work, the more the value.

However, I was reminded of Fitts & Posner’s model of skill acquisition, which has three phases: cognitive, associative, and autonomous. The first, cognitive, is when you benefit from formal instruction: getting models and practice opportunities to map actions to an explicit framework. (Note that this assumes good formal learning design, not rote information and a knowledge test!)  Then there’s an associative stage where that explicit framework is supported in being contextualized and compiled away.  Finally, in the autonomous stage, the learner continues to improve through continual practice.

I was initially reminded of Norman & Rumelhart’s accretion, restructuring, and tuning learning mechanisms, but it’s not quite right. Still, you could think of accreting the cognitive and explicitly semantic knowledge, then restructuring that into coarse skills that don’t require as much conscious effort, until it becomes a matter of tuning a finely automated skill.

This, to me, maps more closely to 70:20:10, because you can see the formal (10) playing a role to kick off the semantic part of the learning, then coaching and mentoring (the 20) supporting the integration or association of the skills, and then the 70 (practice, reflection, and personal knowledge mastery, including informal social learning) taking over; I mapped it against a hypothetical improvement curve.

Of course, it’s not quite this clean. While the formal often does kick off the learning, the role of coaching/mentoring and the personal learning are typically intermingled (though the role shifts from mentee to mentor ;). And, of course, the ratios in 70:20:10 are only a framework for rethinking investment, not a prescription about how you apply the numbers.  And I may well have the curve wrong (this is too flat for the normal power law of learning), but I wanted to emphasize that the 10 only has a small role to play in moving performance from zero to some minimal level, that mentoring and coaching really help improve performance, and that ongoing development requires a supportive environment.
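For the curious, a hypothetical curve like this can be sketched with the classic power law of practice (performance improving as a power function of practice). The parameters and phase boundaries below are purely illustrative, not empirical, and the crisp phase cutoffs are exactly the oversimplification noted above:

```python
def performance(trial, asymptote=100.0, gain=95.0, rate=0.4):
    """Power law of practice: performance rises toward an asymptote
    as a power function of the amount of practice."""
    return asymptote - gain * trial ** (-rate)

# Illustrative (made-up) phase boundaries for the 70:20:10 mapping.
def phase(trial):
    if trial <= 10:
        return "formal (10)"      # cognitive: courses kick off the learning
    if trial <= 40:
        return "coaching (20)"    # associative: mentoring compiles the skill
    return "practice (70)"        # autonomous: ongoing tuning on the job

curve = [(t, phase(t), round(performance(t), 1)) for t in (1, 10, 40, 200)]
```

Note how the curve's steepest gains come early (the formal phase moves you off zero), while the long tail of improvement happens in the 70: that is the point of the diagram.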

I think it’s important to understand how we learn, so we can align our uses of technology to support them in productive ways. As this suggests, if you care about organizational performance, you are going to want to support more than the course, as well as doing the course right.  (Hence the revolution. :)

#itashare

21 January 2015

Wearables?

Clark @ 8:22 am

In a discussion last week, I suggested that the things I was excited about included wearables. Sure enough, someone asked if I’d written anything about it, and I haven’t, much. So here are some initial thoughts.

I admit I was not a Google Glass ‘Explorer’ (and now the program has ended).  While tempted to experiment, I tend not to spend money until I see how a device is really going to make me more productive.  For instance, when the iPad was first announced, I didn’t want one. Between the time it was announced and the time it was available, however, I figured out how I’d use it to produce, not just consume.  I got one the first day it came out.  By the same rationale, I got a Palm Pilot pretty early on, and it made me much more effective.  I haven’t gotten a wrist health band, on the other hand, though I don’t think they’re bad ideas; they’re just not what I need.

The point being that I want to see a clear value proposition before I spend my hard-earned money.  So what am I thinking in regard to wearables? And which wearables do I mean?  I’m talking wrist devices, specifically.  (I may eventually warm up to glasses as well, when they offer more augmented reality than they do now.)  Why wrist devices?  That’s what I’m wrestling with, trying to articulate what is so far a more intuitive assessment.

Part of it, at least, is that a wrist device is with me all the time, but in an unobtrusive way.  It supports a quick flick of the wrist instead of pulling out a whole phone. So it can deliver that ‘smallest info’ in an easy way. And, more importantly, I think it can bring things to my attention more subtly than a phone can.  I don’t need a loud ring!

I admit I’m keen on a more mixed-initiative relationship than I currently have with my technology.  I use my smartphone to get things I need, and it can alert me to things I’ve indicated I’m interested in, such as events I want an audio alert for.  And of course, incoming calls.  But what about things my systems come up with on their own?  This is increasingly possible, and again desirable.  Using context, and with some understanding of my goals, a system might be able to be proactive. So imagine you’re out and about, and your watch reminds you that while you’re here you wanted to pick up something nearby, providing the item and location.  Or it prompts you to prep for that upcoming meeting, providing some minimal but useful info.  Note that this is not what’s currently on offer, largely.  We already have geofencing to do some of this, but right now you largely have to pull out your phone, or have it make a fairly intrusive noise to be heard from your pocket or purse.
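The "pick up something nearby" scenario is, at heart, a geofence check: am I within a trigger radius of a saved errand? Here's a minimal sketch of that logic (the errands and function names are hypothetical; real mobile platforms offer native region-monitoring APIs that do this more efficiently than polling):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius in meters

# Hypothetical errands, each tied to a location and a trigger radius.
errands = [
    {"item": "pick up prescription", "lat": 37.7749, "lon": -122.4194, "radius_m": 200},
    {"item": "return library book", "lat": 37.8044, "lon": -122.2712, "radius_m": 200},
]

def nearby_reminders(lat, lon, errands):
    """Return the errands whose geofence the current position falls inside."""
    return [e["item"] for e in errands
            if haversine_m(lat, lon, e["lat"], e["lon"]) <= e["radius_m"]]
```

The watch's contribution isn't this computation; it's the subtle delivery channel (a tap on the wrist) for whatever the check turns up.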

So, two things about this: one, why the watch and not the phone; and two, why not the glasses? The watch form factor is, to me, a more accessible interface to serve as an interactive companion. As I suggested, pulling the phone out of the pocket, turning it on, and going through the security check (even just my fingerprint) adds more overhead than I necessarily want.  If I can have something less intrusive, even as part of a system and not fully capable on its own, that’s OK.  Why not glasses? I guess it’s just that they seem more unnatural.  I’m accustomed to having information on my wrist, and while I wear glasses, I want them to be invisible to me.  I’d love to have a heads-up display at times, but all the time would seem to get annoying. I’ll stretch and suggest that the empirical result that most folks have stopped wearing them most of the time bears out my story.

Why not a ring, or a pendant, or…?  A ring seems to have too small an interface area.  A pendant isn’t easily observable. My wrist is easy to glance at (hence, watches).  Why not a whole forearm console?  If I need that much interface, I can always pull out my phone, or jump to my tablet. Maybe I will eventually want an iBracer, but I’m not yet convinced. A forearm holster for my iPhone?  Hmmm… maybe too geeky.

So, reflecting on all this, it appears I’m thinking about tradeoffs of utility versus intrusion.  A wrist device seems to fit a sweet spot in an ecosystem of tech: the quick glance, the pocket access, and then various tradeoffs of size and weight for real productivity among tablets and laptops.

Of course, the real issue is whether there’s sufficient information available through the watch to make a value proposition. Is there enough that’s easy to get to that doesn’t require a phone?  Checking the temperature?  Taking a (voice) note?  Getting a reminder, taking a call, checking your location? My instinct is that there is.  There are times I’d be happy not to have to take my phone (to the store, to a party) if I could take calls on my wrist, do minimal note taking and checking, and navigate.  From the business perspective, there’s also performance support, whether push or pull.  I don’t see it for courses, but for just-in-time…  And contextual.

This is all just thinking aloud at this point.  I’m contemplating the Apple Watch but don’t have enough information as of yet.  And I may not feel the benefits outweigh the costs. We’ll see.

31 December 2014

Reflections on 15 years

Clark @ 7:32 am

For the 50th edition of Inside Learning & Technologies, a number of us were asked to reflect on what has changed over the past 15 years.  This was pretty much the period in which I returned to the US and took up with what was effectively a startup, which led to my life as a consultant.  As an end-of-year piece, I have permission to post that article here:

15 years ago, I had just taken a step away from academia and government-sponsored initiatives to a new position leading a team in what was effectively a startup. I was excited about the prospect of taking the latest learning science to the needs of the corporate world. My thoughts were along the lines of “here, where we have money for meaningful initiatives, surely we can do something spectacular”. And it turns out that the answer is both yes and no.

The technology we had then was pretty powerful, and that has only increased in the past 15 years. We had software that let us leverage the power of the internet, and reasonable processing power in our computers. The Palm Pilot had already made mobile a possibility as well. So the technology was no longer a barrier, even then.

And what amazing developments we have seen! The ability to create rendered worlds accessible through a dedicated application and now just a browser is truly an impressive capability. Regardless of whether we overestimated the value proposition, it is still quite the technology feat. And similarly, the ability to communicate via voice and video allows us to connect people in ways once only dreamed of.

We also have rich new ways to interact from microblogs to wikis (collaborative documents). These capabilities are improved by transcending proximity and synchronicity. We can work together without worrying about where the solution is hosted, or where our colleagues are located. Social media allow us to tap into the power of people working together.

The improvements in mobile capabilities are also worth noting. We have gone from hype to hyphens, where a limited monochrome handheld has given way to powerful high-resolution full-color multi-channel always-connected sensor-rich devices. We can pretty much deliver anything anywhere we want, and that fulfills Arthur C. Clarke’s famous proposition that a truly advanced technology is indistinguishable from magic.

Coupled with our technological improvements are advances in our understanding of how we think, work, and learn. We now have recognition about how we act in the world, about how we work with others, and how we best learn. We have information age understandings that illustrate why industrial age methods are not appropriate.

It is not truly new, but reaching mainstream awareness in the last decade and more is the recognition that the model of our thinking as formal and logical is being updated. While we can work in such ways, it is the exception rather than the rule. Such thinking is effortful and it turns out both that we avoid it and there is a limit to how much deep thinking one can do in a day. Instead, we use our intuition beyond where we should, and while this is generally okay, it helps to understand our limitations and design around them.

There is also a spreading awareness of how much our thinking is externalized in the world, and how much we use technology to support us being effective. We have recognized the power of external support for thinking, through tools such as checklists and wizards. We do this pretty naturally, and the benefits from good design of technology greatly facilitate our ability to think.

There is also recognition that the model of individual innovation is broken, and that working together is far superior to working alone. The notion of the lone genius disappearing and coming back with the answer has been replaced by iterations on top of previous work by teams. When people work together in effective ways, in a supportive environment, the outcomes will be better. While this is not easy to effect in many circumstances, we know the practices and culture elements we need, and it is our commitment to get there, not our understanding, that is the barrier.

Finally, our approaches to learning are better informed now. We know that being emotionally engaged is a valued component in moving to learning experience design. We understand the role of models in supporting more flexible performance. We also have evidence of the value of performing in context. It is not news that information dump and knowledge test do not lead to meaningful skill acquisition, and it is increasingly clear that meaningful practice can. It is also increasingly clear that, as things move faster, meaningful skills – the ability to make better decisions – is what is going to provide the sustainable differentiator for organizations.

So imagine my dismay in finding that the approaches we are using in organizations are largely still rooted in approaches from yesteryear. While we have had rich technology opportunities to combine with our enlightened understanding, that is not what we are seeing. What we see is still expectations that it is done in-the-head, top-down, with information dump and meaningless assessment that is not tied to organizational outcomes. And while it is not working, demonstrably, there seems little impetus to change.

Truly, there has been little change in our underlying models in 15 years. While the technology is flashier, the buzz words have mutated, and some of the faces have changed, we are still following myths like learning styles and generational differences, we are still using ‘spray and pray’ methods in learning, we are still not taking on performance support and social learning, and perhaps most distressingly, we are still not measuring what matters.

Sure, the reasons are complex. There are lots of examples of the old approaches, the tools and practices are aligned with bad learning practices, the shared metrics reflect efficiency instead of effectiveness, … the list goes on. Yet a learning & development (L&D) unit unengaged with the business units it supports is not sustainable, and consequently the lack of change is unjustifiable.

And the need is now more than ever. The rate of change is increasing, and organizations now need not just to be effective, but to become agile. There is no longer time to plan, prepare, and execute; the need is to continually adapt. Organizations need to learn faster than the competition.

The opportunities are big. The critical component for organizations to thrive is to couple optimal execution (the result of training and performance support) with continual innovation (which does not come from training). Instead, imagine an L&D unit that is working with business units to drive interventions that affect key KPIs. Consider an L&D unit that is responsible for facilitating the interactions that are leading to new solutions, new products and services, and better relationships with customers. That is the L&D we need to see!

The path forward is not easy but it is systematic and doable. A vision of a ‘performance ecosystem’ – a rich suite of tools to support success that surround the performer and are aligned with how they think, work, and learn – provides an endpoint to start towards. Every organization’s path will be different, but a good start is to start doing formal learning right, begin looking at performance support, and commence working on the social media infrastructure.

An associated focus is building a meaningful infrastructure (hint: one all-singing, all-dancing LMS is not the answer). A strategy to get there is a companion effort. And, ultimately, a learning culture will be necessary. These are not just necessary components for L&D; they are the necessary components for a successful organization, one agile enough to adapt to the increasing rate of change we are facing.

And here is the first step: L&D has to become a learning organization. Mantras like ‘work out loud’, ‘fail fast’, and ‘reflect’ have to become part of the L&D culture. L&D has to start experimenting and learning from the experiments. Let us ensure that the past 15 years are a hibernation we emerge from, not the beginning of the end.

Here’s to change for the better.  May 2015 be the best year yet!

9 December 2014

My thoughts on tech and training

Clark @ 8:27 am

The eLearning Guild, in queuing up interest in their Learning Solutions/Performance Ecosystem conference, asked for some thoughts on the role of technology in training.  And, of course, I obliged.  You can see them here.

In short, I said that technology can augment what we already do, serving to fill the gaps between what we desire and what we can deliver, and that it also gives us some transformative capabilities.  That is, we can make face-to-face time more effective, extend learning beyond the classroom, and move the classroom beyond the physical space.

The real key, a theme I find myself thumping more and more often, is that we can’t keep using technology in ineffective ways. We need to use technology in ways that align with how we think, work, and learn.  And that’s all too rare.  We can do amazing things if we muster the will and resources, do due diligence on what a principled approach would be, and then do the cycles of development and iteration to get the solution working as it should.

Again, the full thoughts can be found on their blog.

 

31 October 2014

Belinda Parmar #DevLearn Keynote Mindmap

Clark @ 11:38 am

Belinda Parmar addressed the critical question of women in tech in a poignant way, pointing out that the small stuff is important: language, imagery, context. She concluded with small actions including new job description language and better female involvement in product development.


29 October 2014

Neil deGrasse Tyson #DevLearn Keynote Mindmap

Clark @ 9:54 am

Neil deGrasse Tyson opened this year’s DevLearn conference. A clear crowd favorite, folks lined up to get in (despite the huge room). In an engaging, funny, and poignant talk, he made a great case for science and learning.


