At the mLearnCon conference, it became clear it was time to write about wearables. At the same time, David Kelly (program director for the Guild) asked for conference reflections for the Guild Blog. Long story short, my reflections are a guest post there.
Karen McGrane #mLearnCon Keynote Mindmap
Karen McGrane evangelized good content architecture (a topic near to my heart) in a witty and clear keynote. With amusing examples and quotes, she brought out just how key it is to move beyond hard-wired, designed content and start working on rule-driven combinations from structured chunks. Great stuff!
Neil Jacobstein #ChurchillClub Keynote Mindmap
Can we jumpstart new tech usage?
It’s a well-known phenomenon that new technologies get used in the same ways as old technologies until their new capabilities emerge. And this is understandable, if a little disappointing. The question is, can we do better? I’d certainly like to believe so! And a conversation on Twitter led me to try to make the case.
So, to start with, you have to understand the concept of affordances, at least at a simple level. The notion is that objects in the world support certain actions owing to the innate characteristics of the object (flat horizontal surfaces support placing things on them, levers afford pushing and pulling, etc.). Similarly, interface objects can imply their capabilities (buttons for clicking, sliders for sliding). These capabilities can be conveyed by visual similarity to familiar real-world objects, or be completely new (e.g. a cursor).
One of the important concepts is whether the affordance is ‘hidden’ or not. So, for instance, on iOS you can have meaningful differences between one, two, three, and even four-fingered swipes. Unless someone tells you about them, however, or you discover them by accident (unlikely), you’re not going to know they exist. And there are now so many that they’re hard to remember. There are many deep arguments about affordances, and they’re likely important, but they can seem like ‘angels dancing on the head of a pin’ arguments, so I’ll leave it at this.
The point here is that technologies have affordances. So, for example, email allows you to transmit text communications asynchronously to a set group of recipients. And the question is: can we anticipate and leverage these properties and skip (or minimize) the stumbling beginnings?
Let me use an example. Remember the Virtual Worlds bubble? Around 2003, immersive learning environments were emerging (one of my former bosses went to work for a company). And around 2006-2009 they were quite the coming thing, and there was a lot of excitement that they were going to be the solution. Everyone would be using them to conduct business, and folks would work from desktops connecting to everyone else. Let me ask: where are they now?
The Gartner Hype Cycle talks about the ‘Peak of Inflated Expectations’ and then the ‘Trough of Disillusionment’, followed by the ‘Slope of Enlightenment’ until you reach the ‘Plateau of Productivity’ (such vibrant language!). And what I want to suggest is that the slope up is where we realize the real meaningful affordances that the technology provides.
So I tried to document the affordances and figure out what the core capabilities were. It seemed that Virtual Worlds really supported two main capabilities: being inherently 3D and being social. Which are important components, no argument. On the other hand, they had two types of overhead: the cognitive load of learning them, and the technological load of supporting them. Which means that their natural niche would be where 3D would be inherently valuable (e.g. spatial models or settings, such as refineries where you wanted to track flows), and where social would also be critical (e.g. mentoring). Otherwise there were lower-cost ways to do either one alone.
Thus, my prediction would be that those would be the types of applications that’d be seen after the bubble burst and we’d traversed the trough. And, as far as I know, I got it right. Similarly, with mobile, I tried to find the core opportunities. And this led to the models in the Designing mLearning book.
Of course, there’s a catch. I note that my understanding of the capabilities of tablets has evolved, for instance. Heck, if I could accurately predict all the capabilities and uses of a technology, I’d be running a venture capital firm. That said, I think that I can, and more importantly, we can, make a good initial stab. Sure, we’ll miss some things (I’m not sure I could’ve predicted the boon that Twitter has become), but I think we can do better than we have. That’s my claim, and I’m sticking to it (until proved wrong, at least ;).
It’s (almost) out!
My latest tome, Revolutionize Learning & Development: Performance and Innovation Strategy for the Information Age, is out. Well, sort of. What I mean is that it’s now available on Amazon for pre-order. Actually, it has been for a while, but I wanted to wait until there was some there there, and now there’s the ‘look inside’ stuff so you can see the cover, back cover (with endorsements!), table of contents, sample pages, and more. Ok, so I’m excited!
What I’ve tried to do is make the case for dragging L&D into the 21st Century, and then provide an onramp. As I’ve been saying, my short take is that L&D isn’t doing what it could and should be doing, and what it is doing, it is doing badly. But I don’t believe complaining alone is particularly helpful, so I’m trying to put in place what I think will help as well. The major components are:
- what’s wrong (you can’t change until you admit the problem :)
- what we know about how we think, work, and learn that we aren’t accounting for
- what it would look like if we were doing it right
- ways forward
By itself, it’s not the whole answer, for several reasons. First, it can’t be. I can’t know all the different situations you face, so I can’t have a roadmap forward for everyone. Instead, I suppose you could think of it as a guidebook (stretching metaphors), offering suggestions that you’ll have to sequence into your own path. Second, we don’t know it all yet. We’re still exploring many of these areas. For example, culture change is not a recipe, it’s a process. Third, I’m not sure any one person can know all the answers in such a big field. So, fourth, to practice what I’m preaching, there should be a community pushing this, creating the answers together.
A couple of things on that last part; the first is a request. The community will need to be in place by the time the book is shipping. The question is where to host it. I don’t intend to build a separate community on the book site, as there are plenty of places to do this: Google Groups, Yahoo Groups, LinkedIn…the list goes on. It can’t be proprietary (e.g. you have to be a paid member to play). Ideally it would have collaborative tools to create resources, but I reckon that can be accommodated via links. What do you folks think would be a good choice?
The second part of the community bit is that I’m very grateful to the many people who’ve helped or contributed. Practitioner friends and colleagues provided the five case studies I’ve had the pleasure to host. Two pioneers shared their thoughts. The folks at ASTD have been great collaborators, both in helping me with resources and in helping me get the message out. A number of other friends and colleagues took the time to read an early version and write endorsements. And I’ve learned together with so many of you: attending events, hearing you speak, reading your writings, and getting your feedback after my talks or in comments on my scribblings here.
The book isn’t perfect; I’ve thought of a number of ways it could be improved since I delivered the manuscript, but I’ve stuck to the mantra that at some point it’s better out than still being polished. This book came from frustration that we can be doing so much better, and we’re not. I didn’t grow up thinking “I’m going to be a revolutionary”, but I can’t not see what I see and not say something. We can be doing so much better than we are. And so I had to be willing to just get the word out, imperfect as it is. It wasn’t (isn’t) clear that I’m the best person to call this out, but someone needs to!
That said, I have worked really hard to have the right pieces in place. I’ve collected and integrated what I think are the necessary frameworks, provided case studies and a workplace scenario, and some tools to work forward. I have done my best to provide a short and cogent kickstart to moving forward.
Just to let you know, I’m starting my push. I’ll be presenting on the book at ASTD’s ICE conference and doing some webinars. Bryan Austin of GameOn Learning interviewed me on my thoughts in this direction. I do believe in the message, and that it at least needs to be heard. I think it’s really the necessary message for L&D (in it, you’ll find out why I’m suggesting we need to shift to P&D!). Consider yourself forewarned! I look forward to your feedback.
Smarts: content or system?
I wrote up my visit to the Intelligent Content conference for eLearnMag, but one topic I didn’t cover there was an unanswered question I raised during the conference: should the ‘smarts’ be in the content or in the system? Which is the better way to adapt?
Now, the obvious answer is the system. Making content smart would require building a bunch of additional elements into the content: there would have to be logic to sense conditions and make changes. Simple adaptation could be built in, but it would be hard to revise if you had new information. Having well-defined content and letting the system use contextual information to choose among it is the typical approach used in the industry.
Let’s consider the alternative for a minute, however. If the content were adaptive, it wouldn’t matter what system it was running on; it would deliver the same capability. For example, you could run under SCORM and still get the smart behavior. And you can’t adapt with a system if you have monolithic learning objects that contain the whole experience.
And, at the time I led a team building an adaptive learning engine, we did see adaptive content. However, we chose to have more finely granulated content, down to individual practice items, separate examples, concepts, and more. Even our introductions were going to have separate elements. We believed that if we had finely articulated content models and rich tagging, we could change the rules running in the system and get new adaptive behaviors across all the content, requiring new rules in only one place.
And if new tags were needed on the content objects, we could write programs to add necessary tags rather than have to hand-address every object. In the smart content approach, if you want to change the adaptation, you’re getting into the internals of every content piece.
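To make the contrast concrete, here’s a minimal sketch of the ‘smarts in the system’ approach. It’s purely illustrative, and the chunk fields, tags, thresholds, and rule are hypothetical (not our actual engine): inert, semantically tagged chunks, with all the adaptive logic living in one system-side rule.

```python
# A minimal, purely illustrative sketch (hypothetical names, not our engine):
# 'dumb' but richly tagged content chunks, with the adaptive smarts living
# in a single system-side rule.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Chunk:
    """A finely granulated content object: a concept, example, or practice item."""
    chunk_id: str
    concept: str      # which concept it addresses
    kind: str         # 'concept' | 'example' | 'practice'
    difficulty: int   # 1 (easy) .. 3 (hard)

@dataclass
class LearnerState:
    """Minimal contextual information the system tracks about the learner."""
    mastery: Dict[str, float] = field(default_factory=dict)  # concept -> 0.0..1.0

def next_chunk_rule(state: LearnerState, pool: List[Chunk]) -> Chunk:
    """All adaptation logic lives here, in one place.

    Changing this rule (or swapping in another) changes behavior across
    ALL content; the chunks themselves never need to be edited."""
    # Work on the least-mastered concept first.
    target = min({c.concept for c in pool},
                 key=lambda con: state.mastery.get(con, 0.0))
    candidates = [c for c in pool if c.concept == target]
    # Low mastery: present a concept or example; otherwise move to practice.
    if state.mastery.get(target, 0.0) < 0.5:
        preferred = [c for c in candidates if c.kind in ("concept", "example")]
    else:
        preferred = [c for c in candidates if c.kind == "practice"]
    return min(preferred or candidates, key=lambda c: c.difficulty)

# Usage: the content stays inert; the system applies the rule.
pool = [
    Chunk("c1", "fractions", "concept", 1),
    Chunk("e1", "fractions", "example", 1),
    Chunk("p1", "fractions", "practice", 2),
]
state = LearnerState(mastery={"fractions": 0.6})
print(next_chunk_rule(state, pool).chunk_id)  # -> p1 (ready for practice)
```

The design point is the one above: to change the adaptation you edit (or replace) the single rule, and re-tagging can be done programmatically across the whole pool, rather than reaching into the internals of every content piece.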
We thought we had it right, and I still think that, for the reasons above, smart systems are the way to go, coupled with semantically tagged and well-delineated content. Happy to hear alternate proposals!
Smarter Than We Think Review
In Smarter Than We Think, Clive Thompson makes the case not only that our technology isn’t making us stupider, but also that we have been using external support for our cognition from our earliest days. Moreover, this is a good thing. Well, if we do so consciously and with good intent.
He starts by telling the story of how – as our chess competitions have moved from man against man, through man against computer, to man & computer against man & computer – the quality of play has fundamentally changed and improved. He ultimately recounts how the outcomes of the combination of man and machine produce fundamentally new insights.
He goes on to cover a wide variety of phenomena. These include augmenting our imperfect memory, the benefits of thinking out loud, the gains from understanding different media properties, the changes when information is to hand, the opportunities unleashed by crowd-sourcing, the implications for education, and the changes when you have continual connection to others. This is not presented as an unvarnished panacea; both the potential and the real problems are covered.
The narrative is richly illustrated with stories culled from interviews with people both well-known and obscure, all with important perspectives. We hear of impacts personal, national, and societal. This is a relatively new book, and while we don’t hear of Edward Snowden or Bradley Manning, their shadows fall on the material. On the other hand, we hear of triumphs for individuals and movements.
I have argued before about how we can, and should, augment our pattern-matching capability with the perfect memory and complex calculation that digital technology provides, and separately how social extends our cognition. Thompson takes this further, integrating the two, extending the story to media and networked capabilities. A good extension and a worthwhile read.
Starting trouble
This seems to be my year of making trouble, and one of the ways is talking about what L&D is and isn’t doing. As a consequence of the forthcoming book (no cover comps yet nor ability to preorder), I’ve had to put my thoughts together, and I’m giving the preliminary version next Thurs, February 6, at 11AM PT, 2PM ET as a webinar for ASTD.
The gist is that there are a number of changes L&D is not accommodating: changes in how business should be run, changes in our understanding of how we think and perform, and even advances in our understanding of learning (at least beyond the point that most of our corporate approaches seem to recognize). Most L&D really seems stuck in the industrial age, and yet we’re in the information age.
And this just doesn’t make sense! We should be the most eager adopters of technology, staying on top of new developments and looking for their potential to support our organizations. We should be leading the charge in being learning organizations: following the business precepts of experimenting regularly, failing fast, and reflecting on the outcomes. Yet that’s not what we’re seeing.
To move forward, we need to do more. To address business needs, we need to consider performance support and social networks. In fact, I argue that these should be our first line of defense, and courses should only be used when a significant skill shift is required. We should be leveraging technology more effectively, looking at semantics and content architectures as well as mobile and contextual opportunities. And we need to be getting strategic about how we’re helping the organization and evaluating not just efficiency but our effectiveness and impact.
This is just the start of a rolling series of activities trying to inject a sense of urgency into L&D (change management step 1). While this will be covered in print, in sessions starting with last week’s TK14, and continuing through Learning Solutions and ICE, here’s a chance to get a headstart. Look for a followup somewhere around April. Hope you’ll join us!
Kate Hartman #ASTDTK14 Keynote Mindmap
Mac memories
This year is the 30th anniversary of the Macintosh, and my newspaper asked for memories. I’ll point them to this post ;).
As context, I was programming for the educational computer game company DesignWare. DesignWare had started out doing computer games to accompany K12 textbooks, but I (not alone) had been arguing for heading into the home market, and at a computer conference I happened to run into Bill Bowman and David Seuss, who’d started Spinnaker to sell education software to the home market and were looking for companies that could develop product. I told them to contact my CEO, and as a reward I got to do the first joint title, FaceMaker. When DesignWare created its own titles, I got to do Creature Creator and Spellicopter before I headed off to graduate school for my Ph.D. in what ended up being, effectively, applied cognitive science.
While I was at DesignWare, I had been a groupie of Artificial Intelligence and a nerd around all things cool in computers, so I was a fan of the work going on at Xerox Palo Alto Research Center (aka Parc), and followed along in Byte magazine. (I confess that, at the time, I was a bit young to have been aware of the mother of all demos by Doug Engelbart and the inspiration it provided for the Parc work.) So I lusted after bitmap screens and mice, and the Lisa (the Mac’s predecessor).
My Ph.D. advisor, Donald Norman, had written about cognitive engineering, and the research lab I joined was very keen on interface design (leading to Don’s first mass-market and must-read book, The Psychology of Everyday Things, subsequently retitled The Design of Everyday Things, and a compendium of writings called User-Centered System Design). He was, naturally, advising Apple. So while I dabbled in meta-learning, I was right there at the heart of thinking about interface design.
Naturally, if you cared about interface design, had designed engaging graphic interfaces, and had watched how badly the IBM PC botched the introduction of the work computer, you really wanted the Macintosh. Command lines were for those who didn’t know better. When the Macintosh first came out, however, I couldn’t justify the cost. I had access to Unix machines and the power of the ARPANET. (The reason I was originally ho-hum about the internet was that I’d been playing with Gopher and WAIS and USENET for years!)
I finally justified the purchase of a Mac II to write my Ph.D. thesis on. I used Microsoft Word, and with its styles feature I was able to meet the rigorous requirements of the library for theses without having to pay someone to type it for me (a major victory in the small battles of academia!). I’ve been on a Macintosh ever since, and have survived the glories of iMacs and Duos (and the less-than-stellar Performa). And I’ve written books, created presentations, and brainstormed through diagrams in ways I just haven’t been able to on other platforms. My family is now also on Macs. When the alternative can be couched as the triumph of marketing over matter, there really has been little other choice. Happy 30th!