Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

27 March 2013

Reflections on Experience

Clark @ 6:26 AM

The API formerly known as Tin Can provides a consistent way to report individual activity. With the simple syntax of <who> <did> <this> (e.g. <Clark Quinn> <wrote> <a blog post>), systems can generate records across a wide variety of activity, creating a rich base of data to mine for contingencies that lead to success. While machine learning and analytics are one opportunity, there’s another: having people look at the data. And one person in particular.
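
To make that concrete, here is a minimal sketch (in Python) of what a statement in that actor/verb/object form looks like. The overall shape follows the spec, but the email address, verb IRI, and activity URL below are made-up placeholders, not real identifiers.

    # A minimal sketch of a Tin Can (xAPI)-style statement for
    # "<Clark Quinn> <wrote> <a blog post>". The actor/verb/object shape follows
    # the spec; the identifiers below are illustrative placeholders only.
    statement = {
        "actor": {
            "name": "Clark Quinn",
            "mbox": "mailto:clark@example.com",      # placeholder identifier
        },
        "verb": {
            "id": "http://example.com/verbs/wrote",  # hypothetical verb IRI
            "display": {"en-US": "wrote"},
        },
        "object": {
            "objectType": "Activity",
            "id": "http://example.com/activities/blog-post-123",  # hypothetical activity
            "definition": {"name": {"en-US": "a blog post"}},
        },
        "timestamp": "2013-03-27T06:26:00+00:00",
    }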

As background, I was fortunate back in 1989 to get a post-doctoral fellowship to study at the Learning Research & Development Center at the University of Pittsburgh. One of the projects that had been developed there was a series of intelligent tutoring systems (ITS) that shared an unusual characteristic. Unlike most ITS, which tutor on the domain, these three systems crossed domains (geometric optics, microeconomics, and electrical circuits, if memory serves), but the tutoring was about systematicity in exploration. That is, the system tracked and intervened on whether you were varying one variable at a time, ensuring your data sampling covered a broad enough range of data points, etc. This reflected work done by Valerie Shute and Jeffery Bonar some years before on learning strategies.

I had the further benefit of working under the guidance of Leona Schauble, a very insightful researcher. One of her projects, with Kalyani Raghavan, worked to make learners’ paths in these systems visible and comprehensible to the learner, and they created the Dynamic And Reflective Notation (DARN, heh!) to capture and represent those paths.

Fast forward to today, and one of the big opportunities I see is for performers to reflect on their own paths of action. The granularity at which Tin Can can capture data, and at which systems might be instrumented to generate it, could be too fine to be useful, so some way of aggregating activity to a reasonable level would be necessary; but looking at one’s own paths, and perhaps others’, would be a useful way to reflect on process and look for opportunities to improve.
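
As a rough illustration of the kind of aggregation I mean, here is a sketch that rolls raw statements up into a per-day tally of verbs for a single performer. It assumes statements shaped like the example above, each carrying an ISO 8601 timestamp; the function name and field access are illustrative, not any particular product’s API.

    from collections import Counter
    from datetime import datetime

    def daily_activity(statements, actor_name):
        """Roll fine-grained statements up into a per-day count of verbs for one
        performer, a coarser view that a person can actually scan and reflect on."""
        summary = {}
        for s in statements:
            if s["actor"]["name"] != actor_name:
                continue
            # Tolerate a trailing 'Z' (UTC designator) on timestamps
            day = datetime.fromisoformat(s["timestamp"].replace("Z", "+00:00")).date()
            verb = s["verb"]["display"]["en-US"]
            summary.setdefault(day, Counter())[verb] += 1
        return summary

A performer (or a coach) could then scan that coarser timeline for patterns: long stretches of consuming without producing, say, or problem-solving attempts that never touch the available resources.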

Reflection on action is a powerful learning and improvement process, but recollection isn’t as good as actual recording. The power of working out loud is really seen when those tracks are left for examination. The API has the opportunity to support more than system mining (“oh look, everyone who has this responsibility who touches this resource does way better than those who don’t”). Not that there’s anything wrong with that, but having performers do it too is a great opportunity not to be missed. As the work on protein folding has found, some patterns are better suited to computer solution, and others to human. We’d be remiss if we didn’t explore the opportunities to be found.
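
That kind of system mining is easy to sketch as well; the parenthetical comparison above might look something like this, with entirely made-up record fields standing in for whatever a real system would log.

    def success_rate_by_resource(records, resource_id):
        """Compare the success rate of performers who touched a given resource
        with that of performers who didn't. Each record is assumed to look like
        {"who": "...", "touched": {resource ids}, "succeeded": True or False}."""
        def rate(group):
            return sum(r["succeeded"] for r in group) / len(group) if group else 0.0
        touched = [r for r in records if resource_id in r["touched"]]
        untouched = [r for r in records if resource_id not in r["touched"]]
        return rate(touched), rate(untouched)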

25 March 2013

Email a ‘rounding error’?

Clark @ 6:23 AM

“Education over the Internet is going to be so big it is going to make email usage look like a rounding error.” – John Chambers, CEO of Cisco

This bold pronouncement by John Chambers, made a number of years ago, hasn’t really played out as promised. I would argue that elearning has begun to grow, what with the rise of online education and the recent interest in MOOCs. And if we take a performance ecosystem view of elearning, including performance support and social, we can begin to think much more broadly about the relationship. I don’t think John had in mind self-learning via Google or YouTube, or learning together via LinkedIn and Twitter, but if we give him the benefit of the doubt, we can begin to see elearning as a substantial bulk relative to email, though not yet at the point where email looks like a rounding error.

However, I want to consider another view of elearning under which such a relationship could hold. If we take a performance ecosystem view of mobile, we may well have that sort of ratio. Think about it: mobile can claim large numbers around:

  • people with mobile phones who have no email or real internet, but voice and text messages give them reach
  • using and/or sharing photos or videos for help
  • accessing the internet through their phones to learn and perform
  • using apps to help them do things, calculating things, supporting their performance
  • connecting to social networks on a variety of platforms: Facebook, Twitter, LinkedIn, …
  • people using context-sensitive apps to solve problems where they are and tell them what’s around
  • the growth in all the above

If we consider all those (and using mobile devices for email :) we actually come up with a pretty big number! We use mobile personally to learn and perform better in increasing ways, and we’ll start doing it more and more for work as well. In this way, mobile learning and performance is becoming the massive shift that will make email seem like a rounding error. And that is big.

 

21 March 2013

Signs of hope?

Clark @ 6:32 AM

Attending the SolutionsFest at the Learning Solutions conference last week, despite my earlier rant, I saw signs of hope.  A quick spot check revealed a number of approaches going above and beyond.

One direction I was pleased to see was a move to more performance support. I saw several solutions that were more focused on providing information as needed, or letting you  navigate to it, rather than thinking everything had to be ‘in the head’. This is definitely a promising sign.  They’re not hard to build, either.

The second promising sign was the use of scenarios. Several different solutions were focused on putting learners into contexts and asking them to perform. This is definitely the direction we need to see more of.  And, again, it’s not that hard to do!

One interesting takeaway was that the innovative solutions seemed to come more from small or internal groups than from the big teams, which only reinforces my overall concern with the industry as a whole. I wonder if it’s easier for small teams to adapt to the advice of folks like Michael Allen (no more boring elearning), Julie Dirksen (Design for How People Learn), and Will Thalheimer than it is for big teams, who not only have to change processes but also educate their customers.

This is an unscientific sample; I did a quick tour of the displays, but couldn’t see them all, as some were just too crowded. I also looked at them relatively briefly and didn’t take comprehensive notes, so this is just a read of my state of mind as I finished. It doesn’t ameliorate the overall concern, but it does provide some hope that things are changing in small pockets.

18 March 2013

Yes, you do have to change

Clark @ 6:22 AM

Of late, I’ve seen a disturbing trend. Not only are the purveyors of existing solutions preaching caution and steadiness, but it even seems like some of the ‘names’ of the field are talking in ways that make it easy to think the industry is largely doing OK. And I do not understand this, because it’s demonstrably wrong. The elearning industry, and the broader learning industry, is severely underperforming its potential (and I’m being diplomatic).

We know what leads to effective learning outcomes. And we’ve known it for decades (just because MOOCs are new doesn’t mean their pedagogies are): clear models, annotated examples, and most importantly deep and meaningful practice focused on significant skill shifts (let alone addressing the emotional side of the equation). Learners need to perform, repeatedly, with guidance, over more and more complex contexts until they achieve the level of performance they need. However, that’s nowhere near what we’re seeing.

What we see is knowledge dump/test, tarted up with trivial interactions. People will pass a test, but they will not have achieved the ability to affect any meaningful business outcomes. If it’s knowledge that performers need, create a job aid, not a ‘spray and pray’. And facilitate people in helping themselves. As things get more complex and move faster, there’s no way new course development can keep up with everything, even if it were a useful approach, and mostly it’s not.

We’re even measuring the wrong things.  Cost per seat hour is secondary (at best).  That’s ‘fine-tuning’, not the core issue.  What’s primary is business impact.  Are you measurably improving key performance indicators as outcomes?

And that’s assuming courses are all the learning unit should be doing, but increasingly we recognize that they’re only a small proportion of what drives important business outcomes, and that the role needs to move from instructional designer to performance consultant. More emphasis can and should be on providing performance resources and facilitating useful interactions rather than creating courses. Think performance support first, then communities of practice, only resorting to courses as a last resort.

Tools that make it easy to turn PowerPoint presentations into page-turning content aren’t going to fix this, nor are tools that provide prettified drill-and-kill, nor ones that let you host and track courses. There are places for those, but they’re not the bulk of the opportunity, and they shouldn’t be the dominant solutions we see. There’s so much more: deeply engaging scenarios and simulation-driven interactions on the formal side, powerful job aid tools for performance support (particularly mobile), coaching and mentoring as a better solution than courses in many (most) cases, performer-focused portals of tools, underlying powerful content management suites, and rich social environments to support performers making each other smarter and more effective.

I’m not quite sure why the easy tools dominate the expo halls, except perhaps because anyone can build them. More worrisome is that they can let designers off the hook in terms of thinking deeper. We need to focus first on rich outcomes, and treat the tools as secondary.

While the industry congratulates itself on how it makes use of the latest technology, the lack of impact is driving it toward irrelevancy. Learners tolerate the courses, at best. Operations groups and others are beginning to focus on the performance solutions available. Executives are beginning to hear the message that the old approach is a waste of resources.

Hiding your head in the sand isn’t going to cut it. The industry is going to have to change.  And that means you will have to change.  But you’re a professional in learning, right?  So lead the way.  The best way to change is to take that first step.

 

15 March 2013

Yvonne Camus #LSCon Keynote Mindmap

Clark @ 9:19 AM

Yvonne Camus closed the conference with a stirring talk on success under extreme circumstances as an Eco-challenge winner.

[Mindmap image: 20130315-122054.jpg]

14 March 2013

Daniel Coyle #LSCon Keynote Mindmap

Clark @ 7:12 AM

Daniel Coyle gave a wonderfully funny, passionate, and poignant keynote, talking about what leads to top performance. Naturally, I was thrilled to hear him tout the principles that I suggest make games such a powerful learning environment: challenge, tight feedback, and large amounts of engaging practice. With compelling stories to illustrate his points, he balanced humor and emotional impact to sell a powerful plea for better learning.

[Mindmap image: 20130314-101527.jpg]

13 March 2013

Aaron Dignan #LSCon Keynote Mindmap

Clark @ 2:01 PM

In a clever talk, Aaron Dignan used game theory to talk about how to improve the workplace.

[Mindmap image: 20130313-170311.jpg]

Robert Ballard #LSCon Keynote Mindmap

Clark @ 6:47 AM

Robert Ballard gave a personal and inspiring tale of exploring the world’s oceans and using technology to broaden reach.

[Mindmap image: 20130313-094950.jpg]

11 March 2013

Barrier to scale?

Clark @ 4:03 AM

I was part of a meeting about online learning for an institution, and something became clear to me.  We were discussing MOOCs (naturally, isn’t everyone?), and the opportunities for delivering quality learning online.  And that’s where I saw a conflict that suggested a fundamental barrier to scale.

When I think about quality learning, the core of it is, to me, about the learning activity or experience.  And that means meaningful problems with challenge and relevance, more closely resembling those found in the real world than ones typically taught in schools and training.  There’s more.

The xMOOCs that I’ve seen have a good focus on quality assessment aligned to the learning goal, but there’s a caveat. Their learning goals have largely been about cognitive skills, about how to ‘do’. And I’m a big fan of focusing on ‘do’, not know. But I recognize there’s more; there’s also ‘be’. That is, even if you have acquired skills in something like AI programming, that doesn’t mean you’re ready to be employed as an AI programmer. There’s much more: for instance, how to keep yourself up to date, how to work well with others, what the nature of AI projects is, etc.

It also came up that, when polled, a learned committee suggested that the top things to learn were to lead, to work well on a team, to communicate, etc. These are almost never developed by working on abstract problems. In fact, I’d suggest that the best activities are meaningful, challenging, and collaborative. Social learning, working together to hear other viewpoints, negotiate a shared understanding, and create a unique response to the challenge, is arguably the best way to learn.

Consequently, it occurs to me that you simply cannot make a quality learning experience that can be auto-assessed. It needs to be rich, mentored, scaffolded, and evaluated. Which means that you have real trouble scaling a quality learning experience. Even with peer assessment, there’s some need for human intervention in every group’s process and product, let alone for generating the beneficial meta-learning aspects that could come from this.

So, while there is real value to be had from MOOCs, such as developing some foundation knowledge and skills, ultimately a valuable education will have to incorporate some mechanism for meaningful activities that develop the desired deep understanding. A tiered model, perhaps? This is still embryonic, but it seems to me that this is a necessary step on the way to a real education in a domain.

7 March 2013

Leadership for Complexity

Clark @ 5:22 AM

The other meme from the retreat event last weekend was the notion of leadership for complexity.  A few of us decided to workshop a topic around performance, leadership, and technology.  We realized technology was only a means to an end, and the real issue was how to move organizations to optimal performance (e.g. the Coherent Organization).

We talked through how things are moving from complicated to complex (and how important it is to recognize the difference), and that organizations need to receive the wake-up call and start moving forward. Using the Cynefin model, the value will not come from the simple (which should be automated) nor the complicated (which can be outsourced), but from dealing with the complex (and chaotic). This won’t come from training and top-down management. As I’ve said before, optimal execution will only be the cost of entry, and the differentiator (and hence the value) will be continual evaluation. And that comes from a creative and collaborative workforce. The issue really is to recognize the need to seize new directions, and then execute the change.

One concern was whether we were talking evolution or revolution. Rather than taking an either/or position, I was inclined to think that you need revolutionary thinking (I like Kathy Sierra’s take on this), but that you fundamentally can’t revolutionize an organization short of total replacement (“blood on the streets”, as one colleague gleefully put it :). I reckoned what was needed was a committed change initiative toward the place the revolutionary thinking pointed.

The issue, then, is the vision and guidance to get there.  What’s needed is leadership that can lead the organization to be able to leverage complexity for success.  This will be about equipping and empowering people to work together on shared goals: sharing, commenting, contributing, collaborating, and more.  It will be inherently experimental in an ongoing way.

What that means practically is an exercise I (and we) are continually working on, but we’ve coalesced on the top-level frameworks to form the basis of tools, and what’s needed now are some organizations to co-develop the solutions. Design-based research, if you will. So who’s up for working on the path to the future?

#itashare

