I attended a recent Meetup of the Bay Area Learning Design & Technology group, and it led to some insights. As context, this is a group that meets in the evening roughly once a month or every other month. It’s composed of students and new graduates as well as experienced practitioners. The topic was Themes from a Hat (topics are polled and then separate discussions are held). I was tapped to host the Learning Design conversation (there were three others: LMS, Measurement, and Social Learning), which meant that a subset of the group sat in on each discussion. We ran four separate rounds of discussion, so everyone had a chance to discuss every topic (except us topic hosts ;).
I’d chosen to start with four questions to prompt discussion:
- What is good learning design?
- Are you doing good learning design?
- What are the barriers to good learning design?
- What can we do to improve learning design?
In each case, we never got beyond the first question! However, in the course of the discussions, we ended up talking quite a bit about the others. I confess that I’m just a wee bit opinionated and a stickler for conceptual clarity, so I probably spoke too much about important distinctions. Yet there were also some valuable insights from the group.
First, it was a great group: enthusiastic, with a wide range of experience and backgrounds. Folks had come into the field from different areas, everything from neuroscience to rabbinical practice! And there were new students still in a Master’s program, job seekers, and those active in the workplace. Everyone contributed. While it meant missing #lrnchat, it was worthwhile to have a different experience. And everyone was kind enough to understand when I had to have my knee up as rehab (thanks!).
The responses to the first question were very interesting: what is good learning design? While most everyone talked about features of the experience, we also talked about both the outcome and the process. There even emerged a discussion about what learning was. I offered the traditional (behaviorist) description: a change in behavior in the same context, e.g. responding in a different (and presumably better) way. I also mentioned my usual: learning is action and reflection; instruction is designed action and guided reflection.
One element that appeared in all four groups was ‘engaging’. Exactly that word. (Only once did I feel compelled to mention that Engaging Learning was the title of my first book! ;) There were other terms that encompassed it, including ‘experience’, ‘stimulating’, and ‘motivating’. I was pleased to see the recognition of the value! To define it, the discussion several times ranged across things like challenging practice and making it meaningful to learners.
Another element that recurred was ‘memorable’. It seemed what was meant was ‘retention’ (over time until needed) rather than that the learning experience was worth recalling. This brought up a discussion of what leads to retention, and of spaced learning. That is, the fact that our brains can only strengthen associations so much in one day before sleep is needed. Slow learning. Reactivation.
That same discussion came up with another repeated term: microlearning. There appeared to be little differentiation between different interpretations of that term, so I made distinctions (as one does ;). People too often use the term microlearning to mean looking something up just when needed (such as a video about how to do something). And that’s valuable. Yet it can lead to successful performance in the moment without any learning (e.g. it’s forgotten shortly thereafter). Which is fine, but it’s not learning! Microlearning might be some very small thing that can be learned right in the moment, but I reckon those are rare. What I really think microlearning could and should be used for is spaced learning. I think that doing that successfully is a non-trivial exercise, by the way.
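To make the spacing idea concrete: a common heuristic in spaced-repetition systems is an expanding interval between reactivations, with each gap longer than the last. Here’s a minimal sketch in Python (my own illustration with made-up parameters, not something from the discussion or a particular product):

```python
from datetime import date, timedelta

def reactivation_schedule(start, sessions=5, first_gap_days=1, factor=2):
    """Return dates for spaced reactivation touchpoints after an
    initial learning event on `start`.

    Uses a simple expanding-interval rule: the first reactivation
    comes after `first_gap_days`, and each subsequent gap is
    multiplied by `factor`. The defaults are illustrative only.
    """
    dates = []
    gap = first_gap_days
    current = start
    for _ in range(sessions):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= factor  # each interval grows, pushing review further out
    return dates

# Learning event on Jan 1 yields reactivations 1, 2, 4, 8, and 16
# days after each previous touchpoint.
schedule = reactivation_schedule(date(2024, 1, 1))
```

Even this toy version shows why doing microlearning-as-spaced-learning well is non-trivial: real systems adjust the intervals per learner based on performance, rather than using a fixed multiplier.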
We covered other topics about design, too. In at least one group we talked about SME limitations and how to work with them. We also talked about the benefits of collaboration, and of knowing your audience. And engaging the audience, making the learning meaningful to them and the organization. Minimalism came up in several different ways as well: not wasting the learner’s time.
One question had arisen in discussion with colleagues, and I took the opportunity in a couple of groups to ask about their design practices. The question was how frequently a course demand is handed to a designer who then works alone from go to whoa. It varied, but while there seemed to be some of that, there was also a fair bit of collaboration, at least at certain points, and some iterative testing. This was heartening to hear! Doing performance consulting and meaningful measurement, however, did appear somewhat challenging.
Overall, there’s an opportunity for some deeper science behind elearning, yet I was very heartened by the enthusiasm and that the design processes weren’t as ‘solitary waterfall’ as I feared. So, who’s up for a deeper learning science workshop? ;)