Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

24 February 2016

When to gamify?

Clark @ 8:10 am

I’ve had a note lurking in my ‘to do’ list about doing a post on when to gamify. In general, of course, I avoid gamification, but I have to acknowledge there are times when it makes sense.  And someone challenged me to think about what those circumstances are. So here I’m taking a principled shot at it, but I also welcome your thoughts.

To be clear, let me first define what gamification is to me.  I’m a big fan of serious games, that is, when you wrap meaningful decisions into contexts that are intrinsically meaningful.  And I can be convinced that there are times when tarting up memory practice with quiz-show window-dressing makes sense, e.g. when the knowledge has to be ‘in the head’.  What I typically refer to as gamification, however, is where you use external resources, such as scores, leaderboards, badges, and rewards, to support behavior you want to happen.

I happened to hear a gamification expert talk, and he pointed out some rules about what he termed ‘goal science’.  He had five pillars:

  1. that clear goals make people feel connected and align the organization
  2. that working on goals together (in a competitive sense ;) makes them feel supported
  3. that feedback helps people progress in systematic ways
  4. that the tight loop of feedback is more personalized
  5. that choosing challenging goals engages people

Implicit in this is that you do good goal setting and rewards. You have to have good alignment to make these benefits real.  He made the point that doing it badly could be worse than not doing it at all!

With these ground rules, we can think about when it might make sense.  I’ll argue that one obvious, and probably sad, case would be when you don’t have a coherent organization, and people aren’t aware of their role in it.  Making up for a lack of effective communication isn’t necessarily a good thing, in my mind.

I think it also might make sense as a fun diversion to achieve a short-term goal. This might be particularly useful for an organizational change, when extra motivation could be of assistance in supporting new behaviors. (Say, for moving to a coherent organization. ;) Or for some periodic event, such as supporting a philanthropic commitment related to the organization.

And it can be a reward for a desired behavior, such as my frequent flier points.  I collect them, hoping to spend them. I resent it a bit, because it’s never as good as is promised, which is a worry; it means it’s not being done well.

On the other hand, I can’t see using it on an ongoing basis, as it seems it would undermine the intrinsic motivation of doing meaningful work.  Making up for a lack of meaningful work would be a bad thing, too.

So, I recall talking to a guy many moons ago who was an expert in motivation for the workplace. And I had the opportunity to see the staggering amount of stuff available to orgs to reward behavior (largely sales) at an exhibit happening next to our event. It’s clear I’m not an expert, but while I’ll stick to my guns about preferring intrinsic motivation, I’m quite willing to believe that there are times it works, including on me.

Ok, those are my thoughts, what’ve I missed?

23 February 2016

The magic question

Clark @ 8:07 am

A number of years ago, I wrote a paper about design, some of the barriers our cognitive architecture provides, and some heuristics I used to get around them.  I wrote a summary of the paper as four posts, starting here.  I was reminded of one of the heuristics in a conversation, and had a slightly deeper realization that of course I wanted to share.

The approach, which I then called ‘no-limits’ design, has to do with looking at what solution you’d develop if you had no limits. I now think of it as the ‘magic’ approach.  As I mentioned in the post series, this approach asks what you’d design if you had magic (and referred to the famous Arthur C. Clarke quote). And while I indicated one benefit in the past, I now think there are two benefits to this approach.

First, if you consider what you’d do if you had magic, you can help prevent a common problem: premature convergence. Our cognitive architecture has weaknesses, and a couple of them revolve around solving problems in known ways and using tools in familiar ways.  It’s too easy to subconsciously rule out new options.  By asking the ‘magic’ question, we ask ourselves to step outside what we know and believe is possible, and consider the options we’d have if we didn’t have the technological limitations.

Similarly, using the notion of ‘magic’ can help us explore other models for accomplishing the goal. If design is not just evolutionary, but you also want to explore the opportunities to revolutionize, you need some way to spark new thinking.  The ability to remove the limitations and explore the core goals facilitates that.

Using this at the wrong time, however, could be problematic: you may have already constrained your thinking too far.  If you consider the design process to be a clear identification of the problem (including the type of design thinking analysis that includes ethnographic approaches) before looking for solutions, followed by consideration of a wide variety of input data about solutions, including other approaches already tried, you’d want this question to come after the problem identification but before exploring any other solutions.

Pragmatically, per my previous post, you want to think about your design processes from the point of view of leverage. Having worked through several efforts to improve design with partners and clients, I’ve found there are clear leverage points that give you the maximum impact on the quality of the learning outcome (e.g. how ‘serious‘ your solution is) for the minimum effort. There are many more small steps that can be integrated to improve your outcomes, so it helps to look at the process and consider improvement opportunities.  So, are you ready to ask the ‘magic’ question?

17 February 2016

Beyond Consulting

Clark @ 8:11 am

I was at a retreat this weekend, consorting with colleagues. One of the persistent perceptions of me came up, which I’ve been reflecting on, and I thought I’d do so ‘out loud’.

So, once consulting became my way of life, I realized I needed to get better at the bits I didn’t know.  I’ve got deep theory and considerable practical experience, but I never was a ‘businessperson’. That is, I didn’t have sales experience, marketing knowledge, or deal-making skills. As part of the solution, I found Robert Middleton, who is basically a consultant who helps other consultants market themselves.  And I paid attention to his recommendations.

One of the interesting things was a model he had that pitted your depth of information against your ability to implement.  A high-information person was an expert, a high-implementation person was a hard worker (low on both was a salesperson ;), and he suggested your goal was to position yourself as an ‘infoguru’, high on both.

I was reminded of it, and realize I see it slightly differently. I’d put someone high on the theory/information side as an academic or researcher, whether they’re in an institution or not.  They know the theories behind the outcomes, and may study them, but don’t apply them. And I’d put someone who can execute against a particular model as a contractor: you know what you want done, and you hire someone to do it.

Then I see consultants in general as those who go beyond contracting: doing analysis up front, sorting between models to determine which are relevant, and then assisting the client to act. And I definitely lump myself in that category, having a very large set of models I draw upon, and lots of experience applying them or developing new ones to meet real needs.  Creative, yet practical, solutions. Reliably.

However…

I was chatting with some colleagues, and their feedback was that while I was perceived highly on the idea side, I didn’t position myself as well on the expertise side.  And it’s true that I talk ideas, largely models, because they are the frameworks that give you traction to solve problems. And, of course, more solutions will occur if people have models to use.  I naively believed that showing I knew the models would help assure people that I can assist.  And so, while I develop new and/or useful ones to help frame things so they can be solved, perhaps I don’t make clear enough that I also work with people to figure out what models make sense to take them forward.

I don’t talk enough about the projects I’ve worked on, nor the results I’ve had. Sometimes it’s because of confidentiality (you don’t want me talking about your secrets, either). And it can be hard to talk quantitatively, because too seldom does L&D actually measure what it does. But I’ve helped folks with game design, mobile strategy, learning process improvement, L&D revolution, and more.  Heck, while I was an academic I had a really good track record of doing interesting projects developing learning and performance solutions. And I’ve been feeding the family as a consultant for 15 years now, so I reckon I’m doing something right ;).

Still, I’ll try to do a bit better in linking the two, because I don’t want you to think what I natter on here isn’t directly applicable to improving what you do. I’ve revamped my website a bit, talking about helping at the learning design or strategy level (feedback appreciated).

You’re welcome to attempt improvements on your own, but if you want assistance in making the move faster with fewer hiccups along the way, I have been assisting folks for a long time now, and would welcome doing it with you. Whether it’s vendors finally wanting to address better learning design, or organizations looking to go beyond the ordinary, I’m here to help. This is what I do!  I find it really rewarding to work with folks and collaboratively generate great outcomes, and I’m looking for more opportunities to assist. So a question for you: am I missing something in helping folks see how I can help them?

16 February 2016

Litmos Guest Blog Series

Clark @ 8:09 am

As I did with Learnnovators, I’ve also done a series of posts with Litmos, in this case a year’s worth.  Unlike the other series, which was focused on deeper eLearning design, they’re not linked thematically; instead they cover a wide range of topics that we mutually agreed were personally interesting and of interest to their audience.

So, we have posts on:

  1. Blended Learning
  2. Performance Support
  3. mLearning: Part 1 and Part 2
  4. Advanced Instructional Design
  5. Games and Gamification
  6. Courses in the Ecosystem
  7. L&D and the Bigger Picture
  8. Measurement
  9. Reviewing Design Processes
  10. New Learning Technologies
  11. Collaboration
  12. Meta-Learning

If any of these topics are of interest, I welcome you to check them out.

10 February 2016

Badass

Clark @ 8:23 am

That’s the actual title of a book, not me being a bit irreverent.  I’ve been a fan of Kathy Sierra’s since I came across her work, e.g. I regularly refer to how she expresses ‘incrementalism‘. She’s on top of usability and learning in very important ways. And she’s got a new book out that I was pleased to read: Badass: Making Users Awesome.  So why do I like it?  Because it elegantly intermixes learning and usability to talk about how to do design right (which I care about; I used to teach interface design besides my focus on learning design), but more importantly because the lessons it invokes also apply to learning.

So what’s she doing differently?  She’s taking product design beyond marketing and beyond customer desires.  The premise of the book is that it’s not about the user and not about the product, it’s about the two together: making the user more capable in ways they care about. Your audience should be saying “Look at what I can do” because of the product, not “I love this product”. This, she argues cogently, is valuable; it trumps mere branding, and instead builds customer loyalty as an intrinsic outcome of the experience they have.

The argument starts with making the case that it’s about what the user’s goals are, and then figuring out how to get there in ways that systematically develop users’ capability while managing their expectations. She talks about being clear on what will occur, and giving users small wins along the way.  And she nicely lays out learning science and motivation research as practical implications.

While she’s more focused on developing complex products with interfaces that remove barriers like cognitive load, and provide incremental capability, this applies to learning as well. We want to get learners to new capabilities in steps that maintain motivation and prevent drop-off. She gets into issues like intermediate skills and how to develop them in ways that optimize outcomes, which is directly relevant to learning design. She cites a wide variety of people in her acknowledgements, including Julie Dirksen and Jane Bozarth in our space, so you know she’s tracking the right folks.

It’s an easy read, too. It’s unusual: paperback, but on weighty paper that supports the colorful graphics illustrating her every point.  There’s at least an equal balance of prose and images, if not more on the latter side.  While not focused specifically on learning design, it includes a lot of that, but also covers performance support and more in an integrated format that resonates with an overall perspective on a performance ecosystem.

While perhaps not as fundamental as Don Norman’s Design of Everyday Things (which she references, and which everyone who designs for anyone else needs to read), it’s a valuable addition for those who want to help people achieve their goals, and that includes product designers, interface designers, and learning experience designers.  If you’re designing a solution for others, whether a mobile app, an authoring tool, an LMS, or other, you do need this. If you’re designing learning, you probably need this. And if you’re designing learning as a business (e.g. designing learning for commercial consumption), I highly recommend giving this a read.

9 February 2016

Social Training?

Clark @ 8:16 am

Sparked by the sight of a post about ‘social training’, I jokingly asked my ITA colleagues whether they could train me to be social.  And, of course, they’ve posted about it.  And it made me think a little bit more too.

Jane talks about being asked “how you make people learn socially”, and mentions that you can’t force people to be social.  That’s the point: you can’t make people engage, particularly if it’s not safe to share. She goes on to say it’s got to be “relevant, purposeful and appealing”, and that what you do is provide the environment and conditions.

Harold riffs off of Jane’s post, and points out that shifting an organization to a more social way of working takes management’s commitment and work from both above and below.  He lists a number of activities he’s engaged in to try to develop success in several initiatives.  His point is that it’s not just org change: you need to adopt a new mindset about responsibility and work towards an effective culture.

I’ve talked in the past about the environmental elements and the skills required.  There are multiple areas that can be addressed, but it’s not to make people learn socially.  You need the right culture, the technology infrastructure, meaningful work, and the skills.  And these aren’t independent, but intrinsically interlinked.

You likely need to start small, working outward. You need to start with meaningful work, make sure that it’s safe to work together, develop the ability to use social tools to accomplish the work, and develop the skills about working together. Don’t take those for granted!  Then, you can lather-rinse-repeat (don’t get me started on the impact of that last word), spreading both to other work projects and up to community.

You’ll want to be strategic about the choice of tools, and the message. It’s not about the tools (there are replacements for every tool), it’s about the functions they serve.  While you want to use the software already in play, you also don’t want to lock people’s abilities to one suite of tools, in case you want to switch.

And, of course, you need to facilitate the interactions as well. Help people ask for help, offer help, provide feedback, and…

As well, you need to manage the messaging around it.  Help people see the upsides, help support the transition (both with plans to address the expected problems and a team ready to work on any unexpected ones), etc.  It is organizational change, but it’s also culture change.  It takes a plan to scale up.

So, joking aside, it’s not about social training (though learning can be social), but instead about creating a learning organization that brings out the best outcomes from and for the employees. As another discussion posited, you don’t get the best customer experience unless you have a good employee experience.  So, are you creating the best?

5 February 2016

Leverage points for organizational agility

Clark @ 8:19 am

I received some feedback on my post on Organizational Knowledge Mastery.  The claim was that if you trusted to human sensing, you’d only be able to track what’s become common knowledge, and that doesn’t provide the necessary competitive advantage. The extension was that you needed market analytics to see new trends. And it caused me to think a little deeper.

I’m thinking that the individuals in the organization, in their sensing and sharing, are tracking things before they become common knowledge. If people are actively practicing ongoing sensemaking, sharing internally, and finding resonance, that can develop understanding before it becomes common knowledge.  They have expertise in the area, so that shared sensemaking should precede what emerges as common knowledge.  Another way to think about it is to ask: where does the knowledge that becomes common knowledge come from?

And I’m thinking that market analytics aren’t going to find the new, because by definition no one knows what to look for yet.  Or at least part of the new.  To put it another way, the qualitative (e.g. semantic) changes aren’t going to be as visible to machine sensing as to human sensing (Watson notwithstanding).  The emerging reality is that human-machine hybrids are more powerful than either alone, but each alone finds different things.  So there were things in protein folding that machines found, but other things that humans playing protein-folding games found.   I have no problem with market data as well, but I definitely think that the organization benefits to the extent that it also supports human sensemaking.  Different insights come from different mechanisms.

And I also think a culture for agility comes from a different ‘space’ than does a rabid focus on numerics.  A mindset that accommodates both is needed.  I don’t think they’re incommensurate.  I’m kind of suspicious of dual operating systems versus a podular approach, as I suspect that the hierarchical activities will be automated and/or outsourced, but I’m willing to suspend my criticism until shown otherwise.

So, still pondering this, and welcome your feedback.

2 February 2016

Organizational Knowledge Mastery?

Clark @ 8:05 am

I was pointed to a report from MIT Sloan Management talking about how big data is critical to shortening ‘time to insight’. And I think that’s a ‘good thing’ in the sense that knowing what’s happening faster is clearly going to be part of agility.  But I must be missing something, because unless I’m mistaken, big data can’t give you the type of insights you really need.

Ok, I get it. Through the ‘test and learn’ process of doing experiments and reading reactions, you can gather data quickly. And I’m all for this.  But this is largely internal, and I think the insights needed are external. And yes, the experiments can be outside the firewall, trying new things with customers and visitors and reading reactions, but that’s still in the realm of the understood or expected. How can such a process detect disruptive influences?

Years ago, friend and colleague Eileen Clegg and I wrote a chapter based upon her biologist husband’s work on extremophiles, looking for insight into how to survive in tough times.  We made analogies from a number of the biological phenomena, and one was the need to be more integrated with the environment, sensing changes and bringing them in. Which, of course, triggered an association.

If we adapt Harold Jarche’s Personal Knowledge Mastery (or PKM), which is about Seek-Sense-Share as a mechanism to grow our own abilities, to the organizational level, we can see a different model.  Perhaps an OKM?  Here, organizations seek knowledge sources, sense via experiments and reflection, and share internally (and externally, as appropriate ;).

This is partly at the core of the Coherent Organization model as well, where communities are seeking and sharing outside as ways to continue to evolve and feed the teams whose work is driving the organization forward. It’s about flows of information, which can’t happen if you’re in a Miranda Organization. And so while big data is a powerful tool, I think there’s something more required.

I think the practices and the culture of the organization are more important.  If you don’t have those right, big data won’t give big insights, and if you do, big data is just one of your tools.  Even if you’re doing experiments, it might be small data (carefully instrumented experiments targeted at getting specific outcomes), rather than big data, that gives you what you need.  But more importantly, sensing what’s going on outside, having diverse interests, and a culture of curiosity are going to be the drivers of unexpected opportunities.

So yes, use the tools to hand and leverage the power of technology, but focus on motivations and culture so that the tools will be used in the important ways.  At least that was my reaction.  What’s yours?
