The official opening event to kick off ATD’s International Conference was our 44th President, Barack Obama. Prompted by questions from Tony Bingham, he eloquently addressed education, values, and more. Thoughtful, witty, and ultimately wise: an inspiring session.
Hard Fun Projects
As a basic premise of my book on designing engaging learning, I maintain that learning can, and should, be ‘hard fun’. When you look at learning and engagement, you find this perfect alignment of elements. And, it occurred to me, that’s also true for good project work. Here I don’t just mean coursework assignments (though those fit too); organizational innovation should also be hard fun!
As I’ve stated before in various places, when you’re designing new solutions, problem-solving, troubleshooting, doing research, etc., you don’t know the answer when you begin. Therefore, you’re learning as you go! It’s not formal learning, it’s informal, but it’s still learning. So what works in learning should make sense for innovation too.
And in learning, the alignment I found between elements of effective education and engaging learning makes sense. Both require (amongst others):
- clear goals
- appropriate challenge
- meaningfulness of the problem to the context
- meaningfulness to the learners
- experimentation
- feedback
And those also define a meaningful project to tackle in the workplace.
That is, first you need to have a clear goal. The size and scope of the task should be within the reach, but not the grasp, of the team. The project has to have a clear benefit to the organization. And the team should be appropriately constituted with skills and committed to the project. The methods required for the innovation will be experimentation and feedback. Of course, you also need diversity on the team, safety to experiment, accountability for the results. (Which is helpful for formal learning too!)
We can, and should, be setting up our projects to meet these criteria. We get better outcomes, research tells us. That not only includes the product of the work, but team engagement as well. This is also a possible start to creating a culture of experimentation and continual learning. Which also has long-term upsides.
This came to me because I was asked in an interview what were the most fun projects I’d done. I realized that working with folks together to address problems, like when I led a team to develop an adaptive learning system, fit the bill. And that’s work I love, whether having a group together to collectively work out better design processes or performance and development strategy. Folks who’ve worked with me similarly have found it valuable. So who’s up for some ‘hard fun’?
Myths in one week…
Next week, I’ll be presenting on myths at ATD’s International Conference (Tues, 1PM). Moreover, there’ll be a book signing at 4PM! I hope to see you there, and for more reasons than you might first imagine.
For one, ATD’s supposed to be supplying me with special bookmarks. Always nice to have a bookmark specific to the book, I reckon. I haven’t seen them yet, but if they’re leveraging the cool design work of Fran Fernandez they used for the book, they’ll be great. But wait, there’s more…
I’ve also arranged for some special ‘Debunk’ badge ribbons. These limited edition collector’s items (*ahem* :) are available to those who can show me their copy of the book (digital or print). They’re to proudly wear on your badge, showing you’re fighting on the side of learning science. (As to the pic: the ribbon was not supposed to be ‘cantaloupe’. Fortunately, the company is making it right, so those have gone back. They will be orange, and the print and design will be the same. Fingers crossed they arrive in time!)
There are other ways to find out more. You can of course buy the book; either through Amazon (Kindle too) or via ATD (PDF too). (Rumor has it that using the code ‘SPRINGBOOKS18’ at ATD will get you 10% off!)
Of course, there’s the ATD webinar for members on May 24th at 11AM PT (2PM ET). There’ll be one for the Debunker Club on June 6th at 10AM PT (1PM ET), details forthcoming. Other webinars are in the works, so stay tuned.
And there’ll be interviews. Also forthcoming. Yes, I’m trying to get the word out, but it’s for a good cause: better learning!
So, if you’re going to ICE, please do say hello (and safe travels). I know San Diego (and love it): undergrad and grad school at UCSD, and my brother still lives there, so I visit a lot. My recommendations: fish tacos (Rubio’s is a safe bet), carnitas (e.g. Old Town Mexican Cafe), and carne asada burritos (but only at a taqueria, not at a restaurant). There are some great local brews; Stone, Pizza Port, and Ballast Point all make a good drop. Also, margaritas, if you can get them made properly, not with a mixer. Hope to see you there!
SMEs for Design
In thinking through my design checklist, I was pondering how information comes from SMEs, and the role it plays in learning design. And it occurred to me visually, so of course I diagrammed it.
The problem with getting design guidance from SMEs is that they literally can’t tell us what they do! The way our brains work, our expertise gets compiled away. While they can tell us what they know (and they do!), it’s hard to get what really needs to be understood. So we need a process.
My key is to focus on the decisions that learners will be able to make that they can’t make now. I reckon what’s going to help organizations is not what people know, but how they can apply that to problems to make better choices. And we need SMEs who can articulate that. Which isn’t all SMEs!
That also means that we need models. Information that helps guide learners’ performance while they compile away their expertise. Conceptual models are the key here; causal relationships that can explain what did happen or predict what will happen, so we can choose the outcomes we want. And again, not all SMEs may be able to do this part.
There’s also other useful information SMEs can give us. For one, they can tell us where learners go wrong. Typically, those errors aren’t random, but come from bringing in the wrong model: one that would make sense if you’re not fully on top of the material. And, again, we may need more than one SME, as sometimes the theoretical expert (the one who can give us models and/or decisions) isn’t as in tune with what happens in the field, and we may also need the supervisor of those performers.
Then, of course, there are the war stories. We need examples of wins (and losses). Ideally, compelling ones (or we may have to exaggerate). They should be (or end up) in the form of stories, to facilitate processing (our brains are wired to parse stories). Of course, after we’re done they should refer to the models, and show the underlying thinking, but that may be our role (and if that’s hard, maybe we either have the wrong story or the wrong model).
Finally, there’s one other way experts can assist us. They’ve found this topic interesting enough to spend the years necessary to be the experts. Find out why they find it so fascinating! Then of course, bake that in.
And it makes sense to gather the information from experts in this order. However, for learning, this information plays roles in different places. To flip it around, our:
- introductions need to manifest that intrinsic interest (what will the learners be able to do that they care about?)
- concepts need to present those models
- examples need to capture those stories
- practice needs to embed those decisions, and
- practice needs to provide opportunities to exhibit those misconceptions before they matter
- closing may also reference that intrinsic interest, rounding out the emotional experience
That’s the way I look at it. Does this make sense to you? What am I missing?
Real (e)Learning Heroes
While there are people who claim to be leaders in elearning (and some are), there is another whole group who flies under the radar. These are the people who labor quietly in the background on initiatives that will benefit all of us. I’m thinking in particular of those who work to advance our standards. And they’re really heroes for what they’ve done and are doing.
The initial learning technology standards came from the AICC. They wanted a way to share important learning around flight, an area with a big safety burden. Thus, they were able to come together despite being competitors.
Several other initiatives followed, including IEEE (which pretty much carries the US effort on electrical and electronic technology standards onto the international stage), and the IMS efforts from academia. They were both working on content/management interoperability when the US government put its foot down. The Department of Defense’s ADL initiative decided upon a version, to move things forward, and thus was born SCORM.
Standards are good. When standards are well-written, they support a basis upon which innovation can flourish. Despite early hiccups, and still some issues, the ability for content written to the standards to run on any compliant platform, and vice versa, has enabled advancements. Well, except for those who were leveraging proprietary standards. As a better example, look how the WWW standard on top of the internet standards has enabled things like, well, this blog!
Ok, so it’s not all roses. There are representatives who, despite good intentions, also have vested interests in things going in particular directions. Their motivations might be their employers, research, or other agendas. But the process, the mechanisms that allow for decision making, usually end up working. And if not, there’s always the ADL to wield the ‘800 lb gorilla’ argument.
Other initiatives include xAPI, sponsored by ADL to address gaps in SCORM. This standard enables tracking and analytics beyond the course. It’s no panacea, but it’s a systematic way to accomplish a very useful goal. Ongoing is the ICICLE work on establishing principles for ‘learning engineering’, and IBSTPI for training, performance, and instruction. Similarly, societies such as ATD and LPI try to create standards for necessary skills (their lists are appendices in the Revolution book).
And it’s hard work! Trying to collect, synthesize, analyze, and fill in gaps to create a viable approach requires much effort both intellectual and social! These people labor away for long hours, on top of their other commitments in many cases. And in many cases their organizations support their involvement, for good as well as selfish reasons such as being first to be able to leverage the outputs.
These people are working to our benefit. It’s worth being aware, recognizing, and appreciating the work they do. I certainly think of them as heroes, and I hope you will do so as well.
It’s here!
So, as you (should) know, I’ve written a book debunking learning myths. Of course, writing it and getting your mitts on it are two different things! I’ve been seeing my colleagues (the ones kind enough to write a blurb for it) showing off their copies, and bemoaning that mine hadn’t arrived. Well, that’s now remedied: it’s here! (Yay!) And the official release date is less than a week away!
My publishing team (a great group) let me know that they thought it was a particularly nice design (assuring me that they didn’t say that to all the authors ;), and I have to say it looks and feels nice. The cover image and cartoons that accompany every entry are fun, too (thanks, Fran Fernandez)! It’s nicely small, yet still substantial. And fortunately they kept the price down.
You can hear more about the rationale behind the work in a variety of ways:
I’ll be doing a webinar for the Asia Pacific region on Thursday 19 Apr (tomorrow!) 6PM PT (9AM Friday Singapore Time).
I’ll be presenting at ATD’s International Conference in San Diego on Tuesday, May 8th at 1PM.
(There will also be a book signing in the conference book store at 4PM. Come say hi!)
There’s a webinar for ATD on May 24th at 11AM PT (2PM ET).
Another webinar, for the Debunker Club (who contributed) on June 6 at 10AM PT, 1PM ET. Details still to come.
Also, Connie Malamed has threatened to interview me, as have Learnnovators. Stay tuned.
So, you’ve no excuse not to know about the problem! I’d feel a bit foolish about such publicity, if the cause weren’t so important. We need to be better at using learning science. Hope to see you here, there, or around.
Plagiarism and ethics
I recently wrote on the ethics of L&D, and I note that I didn’t address one issue. Yet, it’s very clear that it’s still a problem. In short, I’m talking about plagiarism and attribution. And it needs to be said.
In that article, I did say:
That means we practice ethically, and responsibly. We want to collectively determine what that means, and act accordingly. And we must call out bad practices or beliefs.
So let me talk about one bad practice: taking or using other people’s stuff without attribution. Most of the speakers I know can cite instances when they’ve seen their ideas (diagrams, quotes, etc) put up by others without pointing back to them. There’s a distinction between citing something many people are talking about (innovation, microlearning, what have you) with your own interpretation, and literally taking someone’s ideas and selling them as your own.
One of our colleagues recently let me know his tools had been used by folks to earn money without any reimbursement to him (or even attribution). Others have had their diagrams purloined and used in presentations. One colleague found pretty much his entire presentation used by someone else! I myself have seen my writing appear elsewhere without a link back to me, and I’m not the only one.
Many folks bother to put copyright signs on their images, but I’ve stopped because it’s too easy to edit out if you’re halfway proficient with a decent graphics package. And you can do all sorts of things to try to protect your decks, writing, etc, but ultimately it’s very hard to protect, let alone discover that it’s happening. Who knows how many copies of someone’s images have ended up in a business presentation inside a firm! People have asked, from time to time, and I have pretty much always agreed (and I’m grateful when they do ask). Others, I’m sure, are doing it anyway.
This isn’t the same as asking someone to work for free, which is also pretty rude. There are folks who will work for ‘exposure’, because they’re building a brand, but it’s somewhat unfair. The worst are those who charge for things, like attendance or membership, or organizations who make money, yet expect free presentations! “Oh, you could get some business from this.” The operative word is ‘could’; they, meanwhile, certainly are!
Attribution isn’t ‘name dropping’. It’s showing you are paying attention, and know the giants whose shoulders you stand on. Taking other people’s work and claiming it as your own, particularly if you profit by it, is theft. Pure and simple. It happens, but we need to call it out. Calling it out can even be valuable; I once complained and ended up with a good connection (and an apology).
Please, please: ask for permission, call out folks who you see plagiarizing, and generally act in proper ways. I’m sure you do, but overall some awareness raising still needs to happen. Heck, we see amazing instances of it in people’s resumes and speeches, but it’s still not right. I’ve found the people in L&D to be generally warm and helpful (not surprisingly). A few bad apples isn’t surprising, but we can do better. All I can do is ask you to do the right thing yourself, and call out bad behavior when you do see it. Thanks!
Tools and Design
I’ve often complained about how the tools we have make it easy to do bad design. They make it easy to put PPTs and PDFs on the screen and add a quiz. That’s still so, but I decided to look at it from the other direction, and I found that instructive. So here’re some thoughts on tools.
Authoring tools, in general, are oriented on a ‘page’ metaphor; they’re designed to provide a sequence of pages. The pages can contain a variety of media: text, audio, video. And then there are special pages, the ones where you can interact. And, of course, these interactions are the critical point for learning. It’s when you have to act, to do, that you retrieve and apply knowledge, that learning really happens.
What’s critical is what you do. If it’s just answering knowledge questions, it’s not so good. If it’s just ‘click to see more’, it’s pretty bad. The critical element is being faced with a decision about an action to take, then applying the knowledge to discriminate between the alternatives, and making a decision. The learner has to commit! Now, if I’m complaining about the tools making it easy to do bad things, what would be good things?
That was my thinking: what would be ideal for tools to support? I reasoned that the interactions should represent things we do in the real world. Which, of course, are things like fill in forms, write documents, fill out spreadsheets, film things, make things. And these are all done through typical interactions like drag, drop, click, and more.
Which made me realize that the tools aren’t the problem! Well, mostly; click to see more is still problematic. Deciding between courses of action can be done as just a better multiple choice question, or via any common form of interaction: drag/drop, reorder, image click, etc. Of course, branching scenarios are good too, for so-called soft skills (which are increasingly the important things), but tools are supporting those as well. The challenge isn’t inherent in the tool design. The challenge is in our thinking!
As someone recently commented to me, the problem isn’t the tools, it’s the mindset. If you’re thinking about information dump and knowledge test, you can do that. If you’re thinking about putting people in a position to make decisions like the ones they’ll need to make, you can do that too. And, of course, you can provide supporting materials to help them make those decisions.
I reckon the tool vendors are still focused on content and a quiz, but the support is there to do learning designs that will really have an impact. We may have to be a bit creative, but the capability is on tap. It’s up to (all of) us to create design processes that focus on the important aspects. As I’ve said before, if you get the design right, there are lots of ways to implement it!
New and improved evaluation
A few years ago, I had a ‘debate’ with Will Thalheimer about the Kirkpatrick model (you can read it here). In short, he didn’t like it, and I did, for different reasons. However, the situation has changed, and it’s worth revisiting the issue of evaluation.
In the debate, I was lauding how Kirkpatrick starts with the business problem and works backwards. Will countered that the model didn’t really evaluate learning. I replied that its role wasn’t evaluating the effectiveness of the learning design on the learning outcome, but assessing the impact of the learning outcome on the organizational outcome.
Fortunately, this discussion is now resolved. Will, to his credit, has released his own model (while unearthing the origins of Kirkpatrick’s work in Katzell’s). His model is more robust, with 8 levels. This isn’t overwhelming, as you can ignore some. Fortunately, there’re indicators as to what’s useful and what’s not!
It’s not perfect. Kirkpatrick (or Katzell? :) can relatively easily be used for other interventions (incentives, job aids, … tho’ you might not tell it from the promotional material). It’s not so obvious how to do so with his new model. However, I reckon it’s more robust for evaluating learning interventions. (Caveat: I may be biased, as I provided feedback.) And perhaps he should have numbered them in reverse, which Kirkpatrick admitted might’ve been a better idea.
Evaluation is critical. We do some, but not enough. (Smile sheets, level 1, where we ask learners what they think of the experience, have essentially zero correlation with outcomes.) We need to do a better job of evaluating our impacts (not just our efficiency). This is a more targeted model. I draw it to your attention.
P&D Strategies
In an article, Jared Spool talks about the strategies he sees UX leaders using. He lists 3 examples, and talks about how your strategies need to change depending on where you are in relation to the organization. It made me think about what P&D strategies could and should be.
So, one of the ones he cited isn’t unique to UX, of course. That one is ‘continual mentoring’: always having someone shadowing the top person in a role. He suspects that it might slow things down a bit, but the upside is continual up-skilling. Back when I led a team, I had everyone own an area of responsibility, with someone else backing them up. Cynically, I called it the ‘bus’ strategy, i.e. in case someone was hit by a bus. Of course, the real reasons were to account for any variability in the team, create some collaborative work, share awareness, and increase the team’s understanding. This is an ‘internal’ strategy.
He cites another, about ‘socializing’ the vision. In this one, you are collectively creating the five-year vision of what learning (his was UX) looks like. The point is to get shared buy-in to a vision, but it also promotes the visibility of the group. Here again, this is hardly unique to UX, with a small twist ;). This is more an external strategy. I suppose there could be two levels of ‘external’: outside P&D but inside the organization, and then a truly external one (e.g. with customers).
I’d add that ‘work out loud’ (aka Show Your Work) would be another internal strategy (at least to begin with). Here, the P&D team starts working out loud, with the unit head leading the way. It gets the P&D team experimenting with new ways to do things, builds shared awareness, and creates a base from which to start evangelizing outside.
I’d love to hear the strategies you’ve used, seen used, or are contemplating, to continue and expand your ability to contribute to the organization. What’s working? And, for that matter, what’s not?