Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

10 April 2014

Can we jumpstart new tech usage?

Clark @ 7:20 am

It’s a well-known phenomenon that new technologies get used in the same ways as old technologies until their new capabilities emerge.  And this is understandable, if a little disappointing.  The question is, can we do better?  I’d certainly like to believe so!  And a conversation on Twitter led me to try to make the case.

So, to start with, you have to understand the concept of affordances, at least at a simple level.  The notion is that objects in the world support certain actions owing to the innate characteristics of the object (flat horizontal surfaces support placing things on them, levers afford pushing and pulling, etc).  Similarly, interface objects can imply their capabilities (buttons for clicking, sliders for sliding). They can be conveyed by visual similarity to familiar real-world objects, or be completely new (e.g. a cursor).

One of the important concepts is whether the affordance is ‘hidden’ or not.  So, for instance, on iOS you can have meaningful differences between one, two, three, and even four-fingered swipes.  Unless someone tells you about it, however, or you discover it randomly (unlikely), you’re not likely to know it.  And there’re now so many that they’re hard to remember.  There are many deep arguments about affordances, and they’re likely important but they can seem like ‘angels dancing on the head of a pin’ arguments, so I’ll leave it at this.

The point here being that technologies have affordances.  So, for example, email allows you to transmit text communications asynchronously to a set group of recipients.  And the question is, can we anticipate and leverage the properties and skip (or minimize) the stumbling beginnings?

Let me use an example. Remember the Virtual Worlds bubble?  Around 2003, immersive learning environments were emerging (one of my former bosses went to work for a company). And around 2006-2009 they were quite the coming thing, and there was a lot of excitement that they were going to be the solution.  Everyone would be using them to conduct business, and folks would work from desktops connecting to everyone else.  Let me ask: where are they now?

The Gartner Hype Cycle talks about the ‘Peak of Inflated Expectations’ and then the ‘Trough of Disillusionment’, followed by the ‘Slope of Enlightenment’ until you reach the ‘Plateau of Productivity’ (such vibrant language!).  And what I want to suggest is that the slope up is where we realize the real meaningful affordances that the technology provides.

So I tried to document the affordances and figure out what the core capabilities were.  It seemed that Virtual Worlds really supported two main points: being inherently 3D and being social.  Which are important components, no argument. On the other hand, they had two types of overhead, the cognitive load of learning them, and the technological load of supporting them. Which means that their natural niche would be where 3D would be inherently valuable (e.g. spatial models or settings, such as refineries where you wanted to track flows), and where social would also be critical (e.g. mentoring).  Otherwise there were lower-cost ways to do either one alone.


Thus, my prediction would be that those would be the types of applications that’d be seen after the bubble burst and we’d traversed the trough.  And, as far as I know, I got it right.  Similarly, with mobile, I tried to find the core opportunities.  And this led to the models in the Designing mLearning book.

Of course, there’s a catch.  I note that my understanding of the capabilities of tablets has evolved, for instance. Heck, if I could accurately predict all the capabilities and uses of a technology, I would be running venture capital.  That said, I think that I can, and more importantly, we can, make a good initial stab.  Sure, we’ll miss some things (I’m not sure I could’ve predicted the boon that Twitter has become), but I think we can do better than we have.  That’s my claim, and I’m sticking to it (until proved wrong, at least ;).

2 April 2014

It’s (almost) out!

Clark @ 6:42 am

My latest tome, Revolutionize Learning & Development: Performance and Innovation Strategy for the Information Age is out.  Well, sort of.  What I mean is that it’s now available on Amazon for pre-order.  Actually, it’s been for a while, but I wanted to wait until there was some there there, and now there’s the ‘look inside’ stuff so you can see the cover, back cover (with endorsements!), table of contents, sample pages, and more.  Ok, so I’m excited!

What I’ve tried to do is make the case for dragging L&D into the 21st Century, and then provide an onramp.  As I’ve been saying, my short take is that L&D isn’t doing what it could and should be doing, and what it is doing, it is doing badly.  But I don’t believe complaining alone is particularly helpful, so I’m trying to put in place what I think will help as well.  The major components are:

  • what’s wrong (you can’t change until you admit the problem :)
  • what we know about how we think, work, and learn that we aren’t accounting for
  • what it would look like if we were doing it right
  • ways forward

By itself, it’s not the whole answer, for several reasons. First, it can’t be. I can’t know all the different situations you face, so I can’t have a roadmap forward for everyone. Instead, you could think of it as a guidebook (stretching metaphors), offering suggestions that you’ll have to sequence into your own path.  Second, we don’t know it all yet. We’re still exploring many of these areas.  For example, culture change is not a recipe, it’s a process.  Third, I’m not sure any one person can know all the answers in such a big field. So, fourth, to practice what I’m preaching, there should be a community pushing this, creating the answers together.

A couple of things on that last part, the first one is a request.  The community will need to be in place by the time the book is shipping.  The question is where to host it.  I don’t intend to build a separate community for it on the book site, as there are plenty of places to do this.  Google groups, Yahoo groups, LinkedIn…the list goes on. It can’t be proprietary (e.g. you have to be a paid member to play).  Ideally it’d have collaborative tools to create resources, but I reckon that can be accommodated via links.  What do you folks think would be a good choice?

The second part of the community bit is that I’m very grateful to many people who’ve helped or contributed.  Practitioner friends and colleagues provided the five case studies I’ve had the pleasure to host.  Two pioneers shared their thoughts.  The folks at ASTD have been great collaborators in both helping me with resources, and in helping me get the message out.  A number of other friends and colleagues took the time to read an early version and write endorsements.  And I’ve learned together with so many of you by attending events together, hearing you speak, reading your writings, and having you provide feedback on my thoughts via talking or writing to me after hearing me speak or commenting on my scribblings here.

The book isn’t perfect, because I have thought of a number of ways it could be improved since I provided the manuscript, but I have stuck to the mantra that at some point it’s better out than still being polished. This book came from frustration that we can be doing so much better, and we’re not. I didn’t grow up thinking “I’m going to be a revolutionary”, but I can’t not see what I see and not say something.  We can be doing so much better than we are. And so I had to be willing to just get the word out, imperfect.  It wasn’t (isn’t) clear that I’m the best person to call this out, but someone needs to!

That said, I have worked really hard to have the right pieces in place.  I’ve collected and integrated what I think are the necessary frameworks, provided case studies and a workplace scenario, and some tools to work forward.   I have done my best to provide a short and cogent kickstart to moving forward.  

Just to let you know, I’m starting my push.  I’ll be presenting on the book at ASTD’s ICE conference, and doing some webinars. Bryan Austin of GameOn Learning interviewed me on my thoughts in this direction.  I do believe in the message, and that it at least needs to be heard.  I think it’s really the necessary message for L&D (in it, you’ll find out why I’m suggesting we need to shift to P&D!).  Be forewarned!  I look forward to your feedback.

13 March 2014

Smarts: content or system?

Clark @ 7:07 am

I wrote up my visit to the Intelligent Content conference for eLearnMag, but one topic I didn’t cover there was an unanswered question I raised during the conference: should the ‘smarts’ be in the content or the system?  Which is the best way to adapt?

Now the obvious answer is the system. Making content smart would require adding a bunch of additional elements to the content. There would have to be logic to sense conditions and make changes. Simple adaptation could be built in, but it would be hard to revise if you had new information.  Having well-defined content and letting the system use contextual information to choose the content is the typical approach used in the industry.

Let’s consider the alternative for a minute, however.  If the content were adaptive, it wouldn’t matter what system it was running on, it would deliver the same capability.  For example, you could run under SCORM and still have the smart behavior.  And you can’t adapt with a system if you have monolithic learning objects that contain the whole experience.

And, at the time I led a team building an adaptive learning engine, we did see adaptive content. However, we chose to have more finely grained content, down to individual practice items, separate examples, concepts, and more.  Even our introductions were going to have separate elements.  We believed that if we had finely articulated content models, and rich tagging, we could change the rules that were running in the system, and get new adaptive behaviors across all the content while only requiring new rules in one place.

And if new tags were needed on the content objects, we could write programs to add necessary tags rather than have to hand-address every object.  In the smart content approach, if you want to change the adaptation, you’re getting into the internals of every content piece.
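The approach described above can be sketched in code. This is a hypothetical illustration of the general idea, not the engine we actually built: content objects are small and semantically tagged, and the adaptation logic lives in a single rule function in the system, so changing the rule changes behavior across all the content at once.

```python
# Hypothetical sketch: smarts in the system, not the content.
# Content objects are finely grained and semantically tagged;
# the adaptation logic lives in one replaceable rule function.

CONTENT = [
    {"id": "c1", "type": "concept",  "topic": "levers", "difficulty": 1},
    {"id": "e1", "type": "example",  "topic": "levers", "difficulty": 1},
    {"id": "p1", "type": "practice", "topic": "levers", "difficulty": 1},
    {"id": "p2", "type": "practice", "topic": "levers", "difficulty": 2},
]

def next_object(learner, content=CONTENT):
    """One simple rule: struggling learners get an example at their
    level; succeeding learners get a harder practice item."""
    if learner["last_score"] < 0.5:
        wanted = {"type": "example", "difficulty": learner["level"]}
    else:
        wanted = {"type": "practice", "difficulty": learner["level"] + 1}
    for obj in content:
        if all(obj.get(k) == v for k, v in wanted.items()):
            return obj
    return None  # no matching object in the pool

struggling = {"last_score": 0.3, "level": 1}
succeeding = {"last_score": 0.9, "level": 1}
print(next_object(struggling)["id"])  # the example, "e1"
print(next_object(succeeding)["id"])  # the harder practice item, "p2"
```

The point of the split is visible here: to change the adaptive behavior for every piece of content, you rewrite `next_object` in one place; the content objects themselves never change, only (at most) their tags.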

We thought we had it right, and I still think that, for the reasons above, smart systems are the way to go, coupled with semantically tagged and well-delineated content. Happy to hear alternate proposals!


11 February 2014

Smarter Than We Think Review

Clark @ 6:35 am

In Smarter Than We Think, Clive Thompson makes the case that not only is our technology not making us stupider, but that we have been using external support for our cognition from our earliest days. Moreover, this is a good thing. Well, if we do so consciously and with good intent.

He starts by telling the story of how – as our chess competitions have moved from man against man, through man against computer, to man & computer against man & computer – the quality of play has fundamentally changed and improved. He ultimately recounts how the outcomes of the combination of man and machine produce fundamentally new insights.

He goes on to cover a wide variety of phenomena. These include augmenting our imperfect memory, the benefits of thinking out loud, the gains from understanding different media properties, the changes when information is to hand, the opportunities unleashed by crowd-sourcing, the implications for education, and the changes when you have continual connection to others. This is not presented as an unvarnished panacea, but the potential and real problems are covered.

The story is richly illustrated with many anecdotes culled from interviews with people both well-known and obscure, all with important perspectives. We hear of impacts at the personal, national, and societal levels. This is a relatively new book, and while we don’t hear of Edward Snowden or Bradley Manning, their shadows fall on the material. On the other hand we hear of triumphs for individuals and movements.

I have argued before about how we can, and should, augment our pattern-matching capability with the perfect memory and complex calculation that digital technology provides, and separately how social extends our cognition. Thompson takes this further, integrating the two, extending the story to media and networked capabilities. A good extension and a worthwhile read.

28 January 2014

Starting trouble

Clark @ 7:04 am

This seems to be my year of making trouble, and one of the ways is talking about what L&D is and isn’t doing. As a consequence of the forthcoming book (no cover comps yet nor ability to preorder), I’ve had to put my thoughts together, and I’m giving the preliminary version next Thurs, February 6, at 11AM PT, 2PM ET as a webinar for ASTD.

The gist is that there are a number of changes L&D is not accommodating: changes in how business should be run, changes in understanding how we think and perform, and even our understanding of learning has advanced (at least beyond the point that most of our corporate approaches seem to recognize).  Most L&D really seems stuck in the industrial age, and yet we’re in the information age.

And this just doesn’t make sense!  We should be the most eager adopters of technology, staying on top of new developments and looking for their potential to support our organizations.  We should be leading the charge in being learning organizations: following the business precepts of experimenting regularly, failing fast, and reflecting on the outcomes.  Yet that doesn’t reflect what we’re seeing.

To move forward, we need to do more. To address business needs, we need to consider performance support and social networks. In fact, I argue that these should be our first line of defense, and courses should only be used when a significant skill shift is required.  We should be leveraging technology more effectively, looking at semantics and content architectures as well as mobile and contextual opportunities.  And we need to be getting strategic about how we’re helping the organization and evaluating not just efficiency but our effectiveness and impact.

This is just the start of a rolling series of activities trying to inject a sense of urgency into L&D (change management step 1).  While this will be covered in print, in sessions starting with last week’s TK14, and continuing through Learning Solutions and ICE, here’s a chance to get a headstart.  Look for a followup somewhere around April.  Hope you’ll join us!

24 January 2014

Kate Hartman #ASTDTK14 Keynote Mindmap

Clark @ 11:58 am

At ASTD’s TechKnowledge Conference, Kate Hartman talked about wearable computing, showing examples of her idiosyncratic projects connecting people to the world, each other, and themselves.


21 January 2014

Mac memories

Clark @ 6:52 am

This year is the 30th anniversary of the Macintosh, and my newspaper asked for memories.  I’ll point them to this post ;).

As context, I was programming for the educational computer game company, DesignWare.  DesignWare had started out doing computer games to accompany K12 textbooks, but I (not alone) had been arguing for heading into the home market, and happened to run into Bill Bowman and David Seuss at a computer conference, who’d started Spinnaker to sell education software to the home market, and were looking for companies that could develop product. I told them to contact my CEO, and as a reward I got to do the first joint title, FaceMaker. When DesignWare created its own titles, I got to do Creature Creator and Spellicopter before I headed off to graduate school for my Ph.D. in what ended up being, effectively, applied cognitive science.

While I was at DesignWare, I had been a groupie of Artificial Intelligence and a nerd around all things cool in computers, so I was a fan of the work going on at Xerox Palo Alto Research Center (aka Parc), and followed along in Byte magazine. (I confess that, at the time, I was a bit young to have been aware of the mother of all demos by Doug Engelbart and the inspiration of the Parc work.)  So I lusted after bitmap screens and mice, and the Lisa (the Mac predecessor).

My Ph.D. advisor, Donald Norman, had written about cognitive engineering and the research lab I joined was very keen on interface design (leading to Don’s first mass-market and must-read book, The Psychology of Everyday Things, subsequently titled The Design of Everyday Things, and a compendium of writings called User-Centered System Design).  He was, naturally, advising Apple.  So while I dabbled in meta-learning, I was right there at the heart of thinking around interface design.

Naturally, if you cared about interface design, had designed engaging graphic interfaces, and had watched how badly the IBM PC botched the introduction of the work computer, you really wanted the Macintosh.  Command lines were for those who didn’t know better.  When the Macintosh first came out, however, I couldn’t justify the cost.  I had access to Unix machines and the power of the ARPANET.  (The reason I was originally ho-hum about the internet was that I’d been playing with Gopher and WAIS and USENET for years!)

I finally justified the purchase of a Mac II to write my PhD thesis on.  I used Microsoft Word, and with the styles option was able to meet the rigorous requirements of the library for theses without having to pay someone to type it for me (a major victory in the small battles of academia!).  I’ve been on a Macintosh ever since, and have survived the glories of iMacs and Duos (and the less-than-stellar Performa).  And I’ve written books, created presentations, and brainstormed through diagrams in ways I just haven’t been able to on other platforms.  My family is now also on Macs.  When the alternative can be couched as the triumph of marketing over matter, there really has been little other choice.  Happy 30th!

15 January 2014

Intelligent Content

Clark @ 6:45 am

I’ve been on the content rant before, talking about the need to structure content into models, and the benefits of tagging.  Now, there’s something you can do about it.

You have to understand that folks who do content as if their business depended on it, e.g. web marketers, have a level of sophistication that elearning (and I mean all of elearning: performance support, social, etc.) would do well to adopt. The power of leveraging content by description, not by link, is the basis for adaptive, custom, personalized experiences.  But it takes a lot of knowledge and work, and a strategy.

You’ve seen it in Netflix and Amazon recommendations, and sites that support powerful searches.  We can and should be doing this for learning and performance, whether pull or push.  But where do you learn?
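The idea of content by description, rather than by link, can be sketched with a minimal example. This is a hypothetical illustration (the resource pool, tags, and `describe` function are all invented for the sketch): instead of hard-wiring which resource a page points to, we query a tagged content pool with a description of what’s needed, which is what lets context (device, locale, need) drive the selection.

```python
# Hypothetical sketch of "content by description, not by link":
# rather than a fixed hyperlink to one resource, we store metadata
# on each resource and query the pool by describing the need.

RESOURCES = [
    {"id": "r1", "topic": "sales", "format": "video",   "locale": "en"},
    {"id": "r2", "topic": "sales", "format": "job-aid", "locale": "en"},
    {"id": "r3", "topic": "sales", "format": "job-aid", "locale": "de"},
]

def describe(**criteria):
    """Return all resources whose metadata matches the description."""
    return [r for r in RESOURCES
            if all(r.get(k) == v for k, v in criteria.items())]

# A mobile user in Germany who needs quick performance support:
matches = describe(topic="sales", format="job-aid", locale="de")
print([r["id"] for r in matches])  # ['r3']
```

The same query mechanism is what recommendation-style experiences build on: add richer tags and smarter matching rules, and the delivery adapts without touching the resources themselves.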

One of the people I follow is Scott Abel, the Content Wrangler.  And he’s put together the Intelligent Content Conference that will give you the opportunities you need to get on top of this. This isn’t necessarily for the independent instructional designer, but if you do elearning as a business, whether a publisher or custom content house, or if you’re looking for the next level of technical sophistication, this is something you really should have on your radar.

Full disclosure: I will be on a press pass to attend, but they didn’t reach out to me. I reached out to them because I wanted a way to attend. Because I know this is important enough to find a way to hear more.  I don’t have a set company I work for, so if I want to know this stuff to be able to help people take advantage of it, I have to earn my keep (in this case, by writing an article afterward).  I only feel it fair, however, that if I think it’s important enough to finagle a way to attend, I should at least let you know about it.

(And, fair warning, if you do lob something at me, expect to join the many who have received a firm refusal, on principle. I’m not in the PR business.  As I state in my boilerplate response: “I deliberately ignore what comes unsolicited, and instead am triggered by what comes through my network: Twitter, Facebook, LinkedIn, Skype, etc.”.  Save us both time and don’t bother.)

1 January 2014

2014 Directions

Clark @ 6:49 am

In addition to time for reflection on the past, it’s also time to look forward.  A number of things are already in the queue, and it’s also time to see what I expect and hope for.

The events already queued up include:

  • ASTD’s TechKnowledge 2014, January 22-24 in Las Vegas, where I’ll be talking on aligning L&D with organizational needs (hint hint).
  • NexLearn’s Immersive Learning University conference, January 27-30 in Charleston, SC, where I’ll be talking about the design of immersive learning experiences.
  • Training 2014, February 2-5 in San Diego, where I’ll be running a workshop on advanced instructional design, and talking on learning myths.
  • The eLearning Guild’s Learning Solutions, March 17-21 in Orlando, where I’ll be running a 1 day elearning strategy workshop, as well as offering a session on informal elearning.

That’s all that is queued up so far, but stay tuned. And, of course, if you need someone to speak…

You can tell by the topics I’m speaking on what I think are going to be, or should be, the hot issues this year.  And I’ll definitely be causing some trouble.  There are several areas I think are important, and I hope there’ll be some traction:

Obviously, I think it’s past time to be thinking mobile, and I should have a chapter on the topic in the forthcoming ASTD Handbook Ed.2.  Which also is seen in my recent chapter on the topic in the Really Useful eLearning Instruction Manual.  I think this is only going to get more important, going forward, as our tools catch up.  It’s not like the devices aren’t already out there!

A second area I’m surprised we still have to worry about is good elearning design. I’m beginning to see more evidence that people are finally realizing that knowledge dump/test is a waste of time and money. I’m also part of a forthcoming effort to address it, which will also manifest in the aforementioned second edition of the ASTD Handbook.

I’m quite convinced that L&D has a bigger purpose than we’re seeing, which is naturally the topic of my next book. I think that the writing is on the wall, and what is needed is some solid grounding in important concepts and a path forward.  The core point is that we should be looking from a perspective of not just supporting organizational performance via optimal execution, with (good) formal learning and performance support, but also facilitation of continual innovation and development.  I think that L&D can, and must address this, strategically.

So, of course, I think that we still have quite a ways to go in terms of capitalizing on social, the work I’ve been advocating with my ITA colleagues.  They’ve been a boon to my thinking in this space, and they’re driving forward (Charles with the 70:20:10 Forum, Jane with her next edition of the Social Learning Handbook, Harold with Change Agents Worldwide, and Jay continues with the Internet Time Group).  Yet there is still a long ways to go, and lots of opportunity for improvement.

An area that I’m excited about is the instrumentation of what we do to start generating data we can investigate, and analytics to examine what we find.  This is having a bit of a bubble (speaking of cutting through hype with affordances, my take is that “big data” isn’t the answer, big insights are), but the core idea is real.  We need to be measuring what we’re doing against real business needs, and we now have the capability to do it.

And an area I hope we’ll make some inroads on are the opportunities provided by a sort-of ‘content engineering‘ and leveraging that for customized and contextual experiences.  This is valuable for mobile, but goes beyond to a much richer opportunity that we have the capability to take advantage of, if we can only muster the will.  I expect this will lag a bit, but I’m doing my best to help raise awareness.

There’s much more, so here’s to making things better in the coming year! I hope to have a chance to talk and work with you about positive changes.  Here’s hoping your new year is a great one!

23 December 2013

Making sense of emerging technologies

Clark @ 6:47 am

Last week I was attending the board meeting for eLearnMag, the Association for Computing Machinery’s ezine on eLearning.  The goal was to bring together the board to discuss new directions.  eLearnMag bridges the academic and practitioner sectors, providing an opportunity for research to inform practice, and vice versa.

In preparation for the meeting, a survey was taken of the readership, to find what they were looking for.  The top element, by far, was to keep up with emerging technologies.  This makes sense in an era of increasing technology advancement, but it brings with it a worry as well.

Too often, new technologies come out with an abundance of excitement.  Bluntly, there’s a lot of smoke as well as fire. Every new technology is going to be a panacea, particularly for education.  Remember Virtual Worlds?  They were to be the ultimate solution for all learning needs, but instead experienced a crash after a bubble of hype. Now, they’re reemerging with a more reasoned understanding of their core values.

How do we keep from being buried by hype?  We need to understand the core affordances of technologies, the real capabilities brought by technology.  More importantly for our purposes, we need to understand the core learning affordances.  We do this by teasing out the fundamental capabilities, and then matching that to our needs.

For example, I previously took a stab at exploring the affordances of virtual worlds, and similarly for mobile.  The point is to map core capabilities and emerging capabilities, and use those to evaluate technologies for supporting learning and performance.

Going forward, I implore you to try to avoid the hype, and look at the real capabilities.  Look for insight, not bluster.  It’s strategic in making sure technology is used appropriately, and pragmatic to avoid investing in chimeric capabilities.  So, what technologies are you curious about?
