Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

13 October 2016

Infrastructure and integration

Clark @ 8:04 am

When I wrote the L&D Revolution book, I created a chart that documented the different stages L&D could go through along the way.  Looking at it again, I see that I got (at least) one thing slightly off: I talked about content, but it's really about integration and infrastructure.  And I reckon I should share my thinking, then and now.

The premise of the chart was that there are stages of maturity across the major areas L&D should be aware of.  The categories were Culture, Formal Learning, Performance Support, eCommunity, Metrics, and Infrastructure. For each of those, I had two subcategories, and I mapped each at four stages of maturity.

Let me be clear: these were made up. I stuck to consistency in having two sub-areas and mapping to four stages of maturity.  I don't think I was wrong, but this was an effort to raise awareness rather than be definitive. That said, I believed then, and still do now, that the chart I created was roughly right.  With one caveat.

In the area of infrastructure, I focused largely on two subcategories: content models and semantics. I've been big on the ways content could be used, from early work I did on content models that led to flexible delivery in an adaptive learning system, a context-sensitive performance support system, and a flexible content publishing system. I've subsequently written about content in a variety of places, attended an intelligent content conference, and have been generally advocating that it's time to do content like the big boys (read: web marketers).  And I think these areas are necessary, but not sufficient.

I realize, as I review the chart for my upcoming strategy workshop at DevLearn, that I focused too narrowly.  Infrastructure is really about the technical sophistication (which includes content models and semantics, but also tracking and analytics) and the integration of elements to create a true ecosystem.  So there's more to the picture than just the content models and semantics.  Really, we want to be moving on both the sophistication of the model and its technical underpinnings.

We'll be discussing this more in Las Vegas in November. And if you're interested in beginning to offer a richer picture of learning, and in moving L&D to be a strategic contributor to the organization, this is the chance for a jump-start!


4 October 2016

Site Learnings

Clark @ 8:06 am

So I was talking with a colleague, who pointed out that my site wasn't as optimized for findability as it could be, and he recommended a solution. That led to an ongoing series of activities with learnings on both the technical and the learning side.  So I thought I'd share my learnings about sites.

This being a WordPress site, I use plugins, and my colleague pointed me to one that would guide me through steps to improve my site.  So I installed it, and it led me through several steps, one being improving some elements of each post. Some of these had ramifications.  The steps included:

  • adding a focus word or phrase
  • adding a meta-description
  • recommendations to include the focus word in the first paragraph
  • adding images
  • and more

I reckon these are good things to do consistently, but while I sometimes include diagrams, I haven't been rabid about including images.  Which I will probably do more, but not ubiquitously (e.g. this post ;). The other things I'll work on.  BTW, I'm also getting advice on readability, but I'm less likely to change that. This is my blog, after all!

One other change was to move from posts identified by number (e.g. ?p=#) to URLs with a meaningful title. Which is all well and good, but it conflicted with another situation.  See, one of the other recommendations was to be more closely tied into Google's tools for tracking sites, specifically Search Console.  Which had other ramifications.

So, I've put Google tracking code into all of my sites, but the code on Learnlets was old.  I'd put it in, and then my ISP changed the settings on my blog (for security) so I couldn't use the built-in editor to edit the header and footer of the site pages. Which meant I had to find the old code and replace it via FTP. Except, among all the myriad files in a WordPress site, I had no idea where.

Now, I'd tried to do this once before, when I'd gotten all my sites tied into Google Analytics: searching the WP file folders and browsing a number of them, to no avail. And I'd searched for guidance, similarly to no avail.  I tried again this time, still to no avail. I even found a recommended plugin that would let you add code into the header, but it didn't work.
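For what it's worth, that hunt can be brute-forced on a local copy of the site's files. Here's a minimal sketch in Python (the "UA-" prefix of old-style Analytics property IDs is my assumption about what to search for; this is an illustration, not what I actually did):

```python
import os

def find_files_containing(root, needle):
    """Walk a directory tree (e.g. a WordPress install pulled down
    via FTP) and return the files whose text contains the snippet."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if needle in f.read():
                        hits.append(path)
            except OSError:
                continue  # unreadable file; skip it
    return hits

# e.g. find_files_containing("wordpress/", "UA-")
```

Clumsy, but it beats browsing theme files one by one.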

Specifically, even though my site was registering in Google Analytics, it wasn't verified with the Search Console. I tried a number of their recommended steps, like adding a generated .html file to the site and putting a special TXT record in my DNS settings via my domain name host. (And if you don't know what this means, it's not really essential, except to note that it's clearly at the very edge of my deteriorating tech skills. ;)

I finally got on the phone with my ISP, and he gave me the clue I needed to find the right file with the header. Then I could download the file, edit it, and re-upload it.  Which always makes me nervous: changing a core and ubiquitous file for your site could totally stuff things up!

Well, long story short, it worked. I'm now registered with the Search Console, with current Analytics code. Though, in the process of changing my blog's URL style, it's now generating 404 errors on pages that use the old mechanism (it seemed to work okay on some newer ones, but apparently falls apart on some older ones).  It's always something.
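In principle, those old-style links can be mapped to the new ones if you have a post-ID-to-slug lookup. A hypothetical sketch of that redirect logic (all names illustrative; WordPress and redirect plugins do this for real):

```python
from urllib.parse import urlparse, parse_qs

def redirect_old_url(url, slug_by_id):
    """Map an old numeric-style URL (?p=123) to the new slug-style
    permalink, given a post-ID-to-slug lookup table.
    Returns None when there's nothing to redirect to (a real 404)."""
    query = parse_qs(urlparse(url).query)
    post_ids = query.get("p")
    if not post_ids:
        return None  # not an old-style post URL
    slug = slug_by_id.get(post_ids[0])
    return f"/{slug}/" if slug else None
```

So an incoming ?p=123 would land on /whatever-the-post-slug-is/ rather than a 404, assuming the lookup table exists.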

So, the important thing: tech stuff ends up being complicated, but what helps are the same innovation (aka informal learning) steps as always: persistence, a willingness to experiment, a suite of approaches, and a network to fall back on.  And also, if you're using one of my old URLs, it may be a problem!  This may well be true of my own referring sites (e.g. the Quinnovation News page).  Two steps forward, one step back.  Here's to change!

29 September 2016

Workshopping what’s needed: going deep on elearning

Clark @ 8:04 am

Are you ready to really try to make a change in what you’re doing? It’s past time, both at the level of our elearning design, and at the level of elearning strategy.  And now you have the chance to do something about it, because I’m holding workshops addressing each.  In different places with different goals, but each is a way to proceed on going deep on elearning.

If you're interested in the Revolution, in looking at what L&D can, and should, be, you should join me in Las Vegas at the DevLearn conference and sign up for my pre-conference workshop. On Monday, Nov 14th, we're going to spend the day getting seriously into the opportunities of the performance ecosystem and the strategy to get there.  We'll look at the need for not only optimal execution but also continual innovation, what's required, and how the elements work together. Then we'll work through assessing where you're at and where you'd like to be, and give you the opportunity to pull together your own strategic plan. You'll leave with a roadmap forward for your organization.  This is your chance to get a jump on the future of L&D.

If getting serious about elearning design is your thing, you should join us on Wednesday, Nov 30 at Online Educa in Berlin. It's past time to stop producing elearning that's ineffective. Here, my workshop is focused on going deep on elearning.  We're going to spend the day unpacking the details that make (e)learning really stick, and the design revisions that will accomplish it. We'll dig into the cognitive, but also the emotional, aspects that affect the outcomes.  You'll practice the skills, and then work on steps you can practically incorporate into your practice.

If you want to really sink your teeth into either of these important topics, here’s your opportunity.  I hope to see you at one or both!

21 September 2016

Collaborative Modelling in AR (and VR)

Clark @ 8:04 am

A number of years ago, when we were at the height of the hype about Virtual Worlds (computer rendered 3D social worlds, e.g. Second Life), I was thinking about the affordances.  And one that I thought was intriguing was co-creating, in particular collaboratively creating models that were explanatory and predictive.  And in thinking again about Augmented Reality (AR), I realized we had this opportunity again.

Models are hard enough to capture in 2D, particularly if they're complex.  Having a third dimension can be valuable, similarly if we're trying to match how the components are physically structured (think of a model of a refinery, for instance, or a power plant).  Creating one can be challenging, particularly if you're trying to map out a new understanding.  And we know that collaboration is more powerful than solo ideation.  So a real opportunity is to collaborate to create models.

And a number of the old Virtual Worlds had ways to create 3D objects.  It wasn't easy, as you had to learn the interface commands to accomplish the task, but the worlds were configurable (e.g. you could build things) and you could build models.  There was also the overall cognitive and processing overhead inherent in the worlds, but that was a given in using the worlds at all.

What I was thinking, extending my thoughts about AR in general, was that annotating the world is valuable, but how about collaboratively annotating the world?  If we can provide mechanisms (e.g. gestures) for people to not just consume but create the models 'in world' (i.e. while viewing, not offline), we can find some powerful learning opportunities, both formal and informal.  Yes, there are issues in creating, and developing abilities with, a standard 'model-building' language, particularly if it needs to be aligned to the world, but the outcomes could be powerful.

For formal learning, imagine asking learners to express their understanding. Many years ago, I was working with Kathy Fisher on semantic networks, where she had learners express their understanding of the digestive system and was able to expose misconceptions.  Imagine asking learners to represent their conceptions of causal and other relationships.  They might even collaborate on doing that. They could also just build 3D models not aligned to the world (though that doesn't necessarily require AR).

And for informal learning, having team or community members working to collaboratively annotate their environment or represent their understanding could solve problems and advance a community’s practices.  Teams could be creating new products, trouble-shooting, or more, with their models.  And communities could be representing their processes and frameworks.

This wouldn't necessarily have to happen in the real world if the options weren't aligned to external context, so perhaps VR could be used. At a client event last week, I was given the chance to use a VR headset (Google Cardboard) and immerse myself in the experience. It might not even need to be virtual (collaboration could instead be just through networked computers), but there is data from research into virtual reality that suggests better learning outcomes.

Richer technology and research into cognition are starting to give us powerful new ways to augment our intelligence and co-create richer futures.  While in some sense this is an extension of existing practices, it's leveraging core affordances to meet conceptually valuable needs.  That's my model, what's yours?

13 September 2016

Augmenting AR for Learning

Clark @ 8:01 am

We're hearing more and more about AR (Augmented Reality), and one of its core elements is layering information on top of the world.  But in a conversation the other night, it occurred to me that we could push that information to be even more proactive in facilitating learning. And this comes from the use of models.

The key idea I want to leverage is the use of models to predict or explain what happens in the world. As I have argued, models are useful to guide our performance, and in fact I suggest they're the best basis to give people the ability to act, and adapt, in a changing world.  So developing the ability to use them is, I would suggest, valuable.

Now, with AR, we can annotate the world with models.  We can layer on the conceptual relationships that underpin the things we can observe, showing flow, causation, forces, constraints, and more.  We can illustrate tectonic forces, represent socio-economic data, physical properties, and so on.  The question is, can we not just illuminate them, but can we 'exercise' them?

Imagine that when we presented this information, we asked the learner to make an inference based upon the displayed model.  So, for instance, we might ask them, presented with a hypothetical or historical situation to accompany the model, to explain why it would have occurred. Similarly, we could ask them to predict, based upon the model, the outcome of some perturbation.

In short, we’re not only presenting the underlying relationship, but asking them to use it in a particular context.  This is what meaningful practice is all about, and we can use the additional information from the AR overlay as scaffolding to support acquiring not just information but the ability to use it.

Now, motivated and effective self-learners wouldn’t need this additional level of support, but there are plausible situations where it would make sense.  Another extension would be to ask learners to create a particular change of state (as long as the consequences are controllable).  While the addition of information in the world can be helpful, developing that understanding through action could be even more powerful.  That’s where my thinking was going, anyway, where does this lead you?

2 August 2016

Being clear on collaboration

Clark @ 8:01 am

Twice recently, I’ve been confronted with systems that claim to be collaboration platforms. And I think distributed collaboration is one of the most powerful options we have for accelerating our innovation.  So in each case I did some investigation. Unfortunately, the claims didn’t hold up to scrutiny. And I think it’s important to understand why.

Now, true collaboration is powerful.  By collaboration in this sense I mean working together to create a shared representation. It can be a document, spreadsheet, visual, or more. It's like a shared whiteboard, with technology support to facilitate things like editing, formatting, versioning, and more.  When we can jointly create our shared understanding, we're developing a richer outcome than we could independently (or by emailing versions of a document around).

However, what was on offer wasn't this capability.  It's not new; it's been the basis of wikis (and tools like Google Docs), but it's central.  Anything else is, well, something else.  You can write documents, adjust tables and formulas, or edit diagrams together.  Several people can be making changes in different places at the same time, or annotating their thoughts, and it's even possible to have voice communication while it's happening (whether inherently or through additional tools). And it can happen asynchronously as well, with people adding, elaborating, and editing whenever they have time, so the information evolves.

So one supported 'collaborative conversations'.  Um, aren't conversations inherently collaborative?  I mean, it takes two people, right?  And while there may be knowledge negotiation, it's not inherently captured, and in particular it may well be that folks take away different interpretations of what's been said (I'm sure you've seen that happen!).  Without a shared representation, it's still open to different interpretations (and yes, we can disagree post hoc about what a shared representation actually meant, but it's much more difficult). That's why we create representations like constitutions and policies and things.

The other one went a wee bit further, and supported annotating shared information. You could comment on it.  And this isn't bad, but it's not full collaboration.  Someone has to go away and process the comments.  It's helpful, but not as much as jointly creating and editing the information in the first place.

I’ve been a fan of wikis since I first heard about them, and think that they’ll be the basis for communities to continue to evolve, as well as being the basis for effective team work. In that sense, they’re core to the Coherent Organization, providing the infrastructure (along with communication and curating) to advance individual and organizational learning.

So, my point is to be clear on what capabilities you really need, so you can suitably evaluate claims about systems to support your actions.  I’ll suggest you want collaborative tools as well as communication tools.  What do you think?

14 June 2016

What’s Your Learning Tool Stack?

Clark @ 8:11 am

I woke up this morning thinking about the tools we use at various levels.  Yeah, my life is exciting ;).  Seriously, this is important, as the tools we use and provide through the organization impact the effectiveness with which people can work. And lately, I've been hearing the question "what's your <x> stack?" [x|x='design', 'development', …].  What this represents is people talking about the tools they use to do their jobs, and I reckon it's important for us to talk about tools for learning.  You can see the results of Jane Hart's annual survey, but I'm carving it up at a finer granularity, because I think it changes depending on the 'level' at which you're working, à la the Coherent Organization.  So, of course, I created a diagram.

(Diagram: learning tool stack: personal, team, community, organization.)

What we're talking about here, starting at the bottom, are the tools you personally use for learning. Or, of course, the ones others use in your org. So this is how you represent your own understandings, and manipulate information, for your own purposes.  For many people in organizations, this is likely to include the MS Office suite, e.g. Word, PowerPoint, and Excel. Maybe OneNote?  For me, it's Word for writing, OmniGraffle for diagramming (it's what this one was created in), WordPress for this blog (my thinking out loud; it is for me, at least in the first instance), and a suite of note-taking software (depending on the type of notes) and personal productivity tools.

From there, we talk about team tools. These manage communication and information sharing within and between teams.  This can be email, but increasingly we're seeing dedicated shared tools being supported, like Slack, that support creating groups and archive discussions and files.  Collaborative documents are a really valuable tool here, so you're not sending email around (though I'm doing that with one team right now, but it's only back and forth, not coordinating between multiple people, at least on my end!). Instead, I coordinate with one group via Slack, a couple of others via Skype and email, and am using Google Docs and email with another.

From there we move up to the community level. Here the need is to develop, refine, and share best principles, so the need is for tools that support shared representations.  Communities are large, so we need to start having subgroups, and profiles become important. The organization's ESN may support this, though (probably unfortunately) many business units have their own tools. And we should be connecting with colleagues in other organizations, so we might be using society-provided platforms or leveraging LinkedIn groups.  There's also probably a need to save community-specific resources like documents and job aids, so there may be a portal function as well. Certainly ongoing discussions are supported.  Personally, without an org of my own, I tap into external communities using tools like LinkedIn groups (there's one for the L&D Revolution, BTW!) and Facebook (mostly friends, but some from our own field).

Finally, we get to the org level. Here we (should) see organization-wide Enterprise Social Networks like Jive, Yammer, etc., and enterprise-wide portal tools like SharePoint.  Personally, I work with colleagues using Socialcast in one instance and Skype in another (tho' Skype really isn't a full solution).

So, this is a preliminary cut to show my thinking at inception.  What have I forgotten?  What’s your learning stack?

10 May 2016

Two separate systems?

Clark @ 8:05 am

I frequently say that L&D needs to move from just ensuring optimal execution to also supporting continual innovation.  Can these co-exist, or are they fundamentally different?  I really don’t know, but it’s worth pondering.

Kotter (the change management guru) has begun to advocate a dual operating system approach, where companies jointly support an operational hierarchy and an innovation network that are coupled.  I haven't read his book on the topic, but it seems a bit extrinsic: a way of bolting on innovation instead of making it intrinsic to the operation.

On the other hand, there's quite a bit of advocacy for more flexible systems, a more podular approach. Teams or small nodes are increasingly appearing not just for innovation, but for ongoing operation. However, it's not clear how the various areas are coordinated; how, say, marketing stays coherent across pods.

This is what led to our Coherent Organization model.  The notion is that teams are coming in from, and reporting back up through, their communities. And those communities are communicating both within, and outside of, the organization.

It's not clear to me whether the team approach can scale to a global organization, or whether you need the hybrid model.  I can see that the hybrid model would appeal to existing business folks who would be concerned about optimal execution.  I can also see that the new model would require fundamental changes in mechanisms, and perhaps a willingness to trade off absolute perfection in execution to maintain continuing innovation and customer responsiveness.

While intuitively the more biologically inspired approach sounds like the longer-term solution, it's non-trivial to create cultures that are appropriately conducive.  I think organizational operations may be at an inflection point, and there does seem to be data supporting more radical flexibility.  I think a performance ecosystem coupled with a learning organization environment is likely the way to move.  How you get there is part of the revolution that's needed: start small, scale out, etc. And I hope L&D can help lead the way.

4 May 2016

Learning in Context

Clark @ 8:09 am

In a recent guest post, I wrote about the importance of context in learning. And for a featured session at the upcoming FocusOn Learning event, I'll be talking about performance support in context.  But a recent question about how you'd do it in a particular environment got me thinking about the necessary requirements.

As context (ahem), there are already context-sensitive systems. I helped lead the design of one where a complex device was instrumented, and consequently there were many indicators of the device's current status. This trend is increasing.  There are tools to build context-sensitive help systems around enterprise software, whether purchased or home-grown. And there are context-sensitive systems on mobile that track your location and let you use it to trigger a variety of actions.

Now, to be clear, these are already in use for performance support, but how do we take advantage of them for learning? Moreover, can we go beyond 'location'-specific learning?  I think we can, if we rethink.

So first, we can obviously use those same systems to deliver specific learning. We can have a rich model of learning around a system, say a detailed competency map; then, with a rich profile of the learner, we can know what they know and don't. When they're at a point where there's a gap between their knowledge and the desired state, we can trigger some additional information. It's in context, at a 'teachable moment', so it doesn't necessarily have to be assessed.
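As a minimal sketch of that trigger logic (all names hypothetical; a real system would sit on a proper competency model and tracking data):

```python
def teachable_moments(required, learner_profile, current_task):
    """Given the competencies each task requires, and what this learner
    has already demonstrated, return the gaps worth addressing now."""
    needed = required.get(current_task, set())
    known = learner_profile.get("demonstrated", set())
    return sorted(needed - known)

# hypothetical competency map and learner profile
required = {"calibrate-pump": {"flow-basics", "safety-lockout"}}
profile = {"demonstrated": {"flow-basics"}}

gaps = teachable_moments(required, profile, "calibrate-pump")
# each gap is a cue to surface a short learning resource, in context
```

The interesting work, of course, is in building the competency map and keeping the profile current, not in the set difference.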

This would typically be on top of performance support, as they're still learning, so we don't want to risk a mistake. Or we could give them a small chance to try it and get it wrong without actually executing it, then give them feedback and the right answer to perform.  We'd have to be clear, however, about why learning is needed in addition to the right answer: is this something that really needs to be learned?

I want to go a wee bit further, though: can we build it around what the learner is doing?  How could we know?  Besides increasingly complex sensor logic, we can use when they are.  What's on their calendar?  If it's tagged appropriately, we can know at least what they're supposed to be doing.  And we can develop not only specific system skills, but more general business skills: negotiation, running meetings, problem-solving/trouble-shooting, design, and more.
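A toy sketch of that calendar inference (the event tags here are entirely hypothetical):

```python
from datetime import datetime

def current_activity(events, now):
    """Return the tag of whichever tagged calendar event spans 'now',
    or None if nothing on the calendar matches."""
    for start, end, tag in events:
        if start <= now <= end:
            return tag
    return None

# hypothetical tagged events: (start, end, activity-tag)
events = [
    (datetime(2016, 5, 4, 9), datetime(2016, 5, 4, 10), "running-meetings"),
    (datetime(2016, 5, 4, 13), datetime(2016, 5, 4, 15), "negotiation"),
]
```

Knowing the learner is in (or about to enter) a tagged meeting is what would let you offer a quick tip beforehand or a reflection prompt after.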

The point is that our learners are in contexts all the time.  Rather than take them away to learn, can we develop learning that wraps around what they’re doing? Increasingly we can, and in richer and richer ways. We can tap into the situational motivation to accomplish the task in the moment, and the existing parameters, to make ordinary tasks into learning opportunities. And that more ubiquitous, continuous development is more naturally matched to how we learn.

3 May 2016

Showing my age, er, experience

Clark @ 8:05 am

I’ve been reading What the Dormouse Said (How the Sixties Counterculture Shaped the Personal Computer Industry), and it’s bringing back some memories.  Ok, so most of this stuff is older than I am, but there are a few connections, so it’s reminiscing time.  I’ve said some of this before, I believe, so feel free to wander on.  This is me just thinking aloud.

I was taking some computer science classes because I'd found out that biology was rote memorization and cut-throat medical prep (which I did not want to do; I was hoping for marine bio), and a buddy was doing it.  Given that I was at UCSD at the time, I naturally learned UCSD Pascal (as well as Fortran, which I fortunately forgot almost immediately, and MIXAL likewise). I enjoyed algorithms, however, and could solve problems. I was also enchanted with AI (despite my first prof).  And I was tutoring math and science for some extra pocket money (even classes I hadn't taken yet!).

Then I got a job doing computer support for the office that ran the tutoring (literally carrying decks of cards in Algol to run through the computer center). And a light went on: computers for learning!  There was no such major then at my school, but there was a program to design your own major, and I found a couple of professors willing to serve as my advisors (thank you, Hugh Mehan and Jim Levin). They even let me work on a project with them (email for classroom discussion, circa 1978; we had ARPANET, the predecessor to the internet).  It eventually even got published as a journal article.

I called all over the country, trying to find someone who needed a person interested in computers for learning.  I even interviewed at Xerox PARC with John Seely Brown, courtesy of Tom Malone (I didn't get the job; they wanted something I'd done, but I didn't know their term for it!).  After a small job doing some statistical work for a research project, I managed to get a job designing and programming educational computer games for DesignWare (you can still play some of the products here, the magic of the internet).  We went from Basic to Forth (for speed and small size), though I later moved away from coding with the demise of HyperCard ;).

And the main connection to the cool stuff, besides the interview at PARC, was visiting the West Coast Computer Faire.  It was cool in and of itself, but there I met David Seuss, who along with Bill Bowman was starting Spinnaker, a company to do home educational software.  DesignWare had been doing games to go along with publisher offerings, and I was pushing the home market.  After a conversation, I introduced David to my boss Jim Schuyler (Sky), and off we went. As a reward, I got to do FaceMaker. Eventually, DesignWare started doing its own titles, and I also did Spellicopter and Creature Creator before I realized I wanted to go back to grad school.

Along the way I also read Byte magazine and tracked efforts like Smalltalk and folks like Alan Kay.  I've subsequently had the pleasure of meeting him, as well as Doug Engelbart and Ted Nelson, so I've somewhat closed the loop on those heady days.  There's much more between then and now, but that's enough for one post. And most of my counterculture experiences were behind me by that time, so I didn't really get a chance to see those connections, but it was an exciting time, and a great exposure to the possibilities.
