Last Friday’s #GuildChat was on Agile Development. The topic is interesting to me, because like with Design Thinking, it seems like well-known practices with a new branding. So as I did then, I’ll lay out what I see and hope others will enlighten me.
As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka Human Computer Interaction or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating the best practices from HCI to the ed tech community. What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty familiar with the principles and a fan. That is: really understand the context, design and test frequently, and work in teams with your customers.
Fast forward a couple of decades, and the Agile Manifesto puts a stake in the ground for software engineering. And we see a focus on releasable code, but again with principles of iteration and testing, teamwork, and tight customer involvement. Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto.
That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s SAM or Ger Driesen’s Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design. And this is a good thing (only decades late ;). Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.
And we certainly need change. The existing approaches we too often see involve a designer being given some documents, access to a SME (if lucky), and being told to create a course on X. Sure, there're tools and templates, but they are focused on making particular interactions easier, not on ensuring better learning design. And the person works alone, doing the design and development in one pass. There are likely to be review checkpoints, but there's little testing. There are variations on this, including perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it's largely an independent, one-way flow, and this isn't good.
The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don't work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don't take into account the fact that, as with Heisenberg's uncertainty principle, what we implement will change the system. Iterative development and testing allows the specs to change after initial experience. Several issues arise with this, however.
For one, there's a question about what is the right size and scope of a deliverable. Learning experiences, while typically overwritten, do have some stricture that keeps them from having intermediately useful results. I was curious about what made sense; to me it seemed that you could develop your final practice first as a deliverable, and then fill in with the required earlier practice and content resources. This seemed similar to what was offered up during the chat in answer to my question.
The other one is scoping and budgeting the process. I often ask, when talking about game design, how to know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer would be when you've hit your metrics, the ones you should set before you begin that determine the parameters of a solution (and they can be consciously reconsidered as part of the process). The typical answer, particularly for those concerned with controlling costs, is something like a heuristic choice of 3 iterations. Drawing on some other work in software process, I'd recommend creating estimates, but then reviewing them afterward. In the software case, people got much better at estimates, and that could be a valuable extension. But it shouldn't be any more difficult to estimate, certainly with some experience, than existing methods.
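To make the "stop when you've hit your metrics, not when you run out of money" idea concrete, here's a minimal sketch of such an iteration loop. All names (`iterate_design`, `evaluate`, `revise`) and the metric names are hypothetical illustrations, not anything from the post:

```python
# Hypothetical sketch: iterate on a design, evaluating it against
# pre-set target metrics each cycle; stop early once the targets are
# met, with the iteration count only as a fallback budget.

def iterate_design(evaluate, revise, design, targets, max_iterations=3):
    """Run design/test cycles until every target metric is met,
    or the agreed iteration budget is exhausted."""
    for i in range(max_iterations):
        scores = evaluate(design)  # e.g. {"completion": 0.92, "transfer": 0.7}
        if all(scores.get(m, 0) >= t for m, t in targets.items()):
            return design, i + 1   # metrics hit: stop iterating here
        design = revise(design, scores)
    # Budget exhausted without hitting targets: a cue to revisit the
    # estimates (or the targets) rather than just ship what's there.
    return design, max_iterations
```

The point of the sketch is that the fixed "3 iterations" becomes a budget and a trigger for re-estimating, while the metrics set before you begin remain the actual stopping condition.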
Ok, so I may be a bit jaded about new brandings on what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing. I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should be glad you did.
Rob Moser says
The problem with “Agile” isn’t the underlying concepts, it’s how they’re generally implemented in the workplace. Programming in hour-long sprints which are effectively judged by the amount of code you produce, all done with someone looking over your shoulder the entire time. It’s like sending your entire programming staff to typing lessons; if they can type 20% faster, clearly they can produce 20% more useful code, right?
Chris Riesbeck says
When Kent Beck first wrote about extreme programming (XP), he was quite clear that none of the ideas were new. Certainly the goals weren’t new. Everyone knew developers should write lots of code to validate their programs — but they never had the time. Everyone knew code should be seen by multiple pairs of eyes — but only a few companies invested in time-consuming code reviews. And so on. Academics in software engineering wrote long papers with theories of what developers should do, but… well, you know.
Where agile was different was that it came from developers, who asked “OK, we know we should, but we don’t. Let’s debug. Why don’t we do what we should? What sustainable simple changes could we make in how we work that might fix those bugs?” Why don’t we write test code? We run out of time at the end of projects. OK, let’s try writing the test code first. Why don’t we meet for code reviews? We’re busy and meetings are hard to schedule. OK, let’s try programming in pairs. And so on.
Could you expand on “Learning experiences … have some stricture that keeps them from having intermediately useful results.” What stricture?
As for when to stop iterating, in agile the answer is whenever you say we need to stop. Fix the deadline and then adjust scope as you discover how things are going. I think that’s what you’re saying with picking something like 3 iterations, but the critical thing is for clients to understand that you can’t fix both time and scope, i.e., metrics to be met. One or the other. That’s the simple consequence of working on wicked problems like learning.
Clark says
Thanks for the feedback. Rob, your practical insight reminds me of this: http://thenewid.com/2015/09/01/we-need-agile.html ;)
Chris, thanks for the historical context. What I meant by strictures is that the ‘minimum viable product’ for a learning experience is, typically, of more complexity than a particular code function. It’s likely at least a practice and some resources. Though we could of course develop components independently (e.g. the practice, then elaborate). Certainly welcome counterexamples (he says, pondering an open-ended question that subs for an entire simulation). My open question here is: given the typical nature of code sprints, and factoring down to functions, what’s the learning experience equivalent?
Appreciate the interactions (Chris, given your comments, I hope we get to cross paths some time!)
Chris Riesbeck says
Re the New ID cartoon: let’s give credit to the original version: http://dilbert.com/strip/2005-11-16
Re MVPs for learning experiences: I think the relevant agile concept is not a piece of code, but a user story (https://www.mountaingoatsoftware.com/agile/user-stories). User stories define bits of end user value.
The standard user story form “as a I can in order to ” adapts very nicely to learning: “As a I can in order to .”
There are criteria for what makes good user stories (http://guide.agilealliance.org/guide/invest.html) that probably map over to learning stories as well.
Note: user stories are not requirements, just tokens for conversation, and they’re too small for planning (I prefer scenarios for that: http://www.slideshare.net/KimGoodwin/storytelling-by-design-scenarios-talk-at-confab-2011). But they are good brick-level units on which to build, IMO.
Re paths crossing: would love to meet some day.
Clark says
Chris, ok, fair enough on the original version (yikes, but hope that was likely an emergent meme wherever). Like the user story & scenario, don’t quite get the original form so also don’t get the learning variant. Just think that learning something (and yes, it’s a continuum) might be a bigger scope than meeting an in-the-moment need, and perhaps harder to create small deliverables against (though discrete learning components, e.g. example vs practice, would make sense). Again, happy to be wrong.
Chris Riesbeck says
Ah, the user story templates are incoherent because I used angle braces around the variable parts. What I originally typed was
Agile user story: As [a type of user] I can [do action with a program] in order to [achieve a goal of mine].
Learning story: As [a category of learner] I can [do action] in order to [achieve a goal of mine].
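[Editor's note: the template above can be sketched as a tiny formatter. This is a purely illustrative example; the function name and the sample learner, action, and goal are hypothetical, not from the discussion.]

```python
# Hypothetical sketch: fill in the learning-story template
# "As [a category of learner] I can [do action] in order to
# [achieve a goal of mine]" with concrete values.

def learning_story(learner, action, goal):
    """Render a learning story in the 'As ... I can ... in order to ...' form."""
    return f"As {learner} I can {action} in order to {goal}."

print(learning_story(
    "a new sales rep",              # a category of learner (example)
    "handle a pricing objection",   # do action (example)
    "keep a deal moving"))          # achieve a goal of mine (example)
# prints: As a new sales rep I can handle a pricing objection in order to keep a deal moving.
```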
The main difference is that agile is about adding capabilities to software while education is about adding capabilities to people. If only we could do the latter with simple coding — or perhaps not, given how that would be misused.
I didn’t see this as meeting in-the-moment needs. It’s more about thinking about skills at a smaller level, prioritized by learner-centered value.