A tweet from Joshua Kerievsky (@JoshuaKerievsky) led me to the concept of design debt in programming. The idea is (quoting from Ward Cunningham):
Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite…. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.
I started wondering what the equivalent in learning design would be. Obviously, software design isn't the same as learning design, though learning design could stand to benefit from what software engineers know about process and quality. For example, the Personal Software Process's focus on quality review and data-driven improvement could do wonders for improving individual and team learning design.
Similarly, refactoring to remove typical bad practices in programming maps readily onto the reliable patterns we see in Broken ID. There are mistakes we reliably make, yet we neither identify them nor have processes to systematically remedy them.
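To make the refactoring analogy concrete, here's a minimal, hypothetical Python sketch (the function names, data, and discount rule are all invented for illustration): the "ship it now" version duplicates a rule in two places, and paying down the debt means consolidating it into one small, testable unit.

```python
# Before: the quick-and-dirty version that shipped first.
# The discount rule is copy-pasted, so every change must be patched twice.
def quarterly_total(orders):
    total = 0
    for order in orders:
        if order["amount"] > 100:
            total += order["amount"] * 0.9   # duplicated rule
        else:
            total += order["amount"]
    return total

def annual_total(orders):
    total = 0
    for order in orders:
        if order["amount"] > 100:
            total += order["amount"] * 0.9   # same rule, duplicated again
        else:
            total += order["amount"]
    return total

# After: the debt is paid off by consolidating the rule in one place,
# so future changes (and tests) touch a single function.
def discounted(amount, threshold=100, rate=0.9):
    return amount * rate if amount > threshold else amount

def total_revenue(orders):
    return sum(discounted(order["amount"]) for order in orders)
```

The learning design parallel would be pulling a practice-activity pattern that's been copy-pasted across modules into one deliberately designed, reusable component.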
What are the consequences of these mistakes? It's clear we often take shortcuts in our learning design and, let's be honest, we seldom go back. For big projects, we might create iterative representations (outlines, then finished storyboards), and ideally we tune them once developed, but we seldom launch and then reengineer based upon feedback unless the result is heinous. Heck, we scandalously seldom even measure the outcomes with more than smile sheets!
For software engineering, the debt accrues as you continue to patch the bad code rather than fixing it properly (paying off the principal). In learning design, the cost is in continuing to use the bad learning design: you've minimized the effectiveness, and consequently wasted the money it cost and the time of the learners. Another way we accrue debt is to transfer learning designed for one mode, e.g. face-to-face (F2F) delivery, and then re-implement it as elearning, synchronous or asynchronous.
In software engineering, you're supposed to design your code in small, functional units with testable inputs and outputs; there might be different ways of accomplishing things inside, but the important part is the testable results. Our learning equivalent would be how we address learning objectives. Of course, first we have to get the objectives right, and how they build to achieve the necessary outcome, but then the focus shifts to getting the proper approach to meeting each objective. If we focus on the latter, it's clear we can think about refactoring to improve the design of each component.
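As a hedged illustration of "testable inputs and outputs" (reusing the hypothetical discounted function from the earlier sketch), here is what a small unit test looks like: the internals can be rewritten any number of ways, as long as these checks on the results still pass.

```python
import unittest

def discounted(amount, threshold=100, rate=0.9):
    # Hypothetical pricing rule from the earlier sketch; the "inside"
    # is free to change as long as the observable behaviour holds.
    return amount * rate if amount > threshold else amount

class DiscountedTests(unittest.TestCase):
    # The tests pin down inputs and outputs, not the implementation.
    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(discounted(100), 100)

    def test_discount_applied_above_threshold(self):
        self.assertAlmostEqual(discounted(200), 180.0)

if __name__ == "__main__":
    unittest.main()
```

In the analogy, each objective plays the role of a test: the activity that meets it can be reworked freely so long as the objective is still demonstrably achieved.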
Frankly, our focus on process is still too much on a waterfall model that's been debunked as an approach elsewhere. We don't have quality controls in a meaningful way, and we don't check to see what reliable mistakes we're making. Maybe we need a quality process for design: I see standards, but I don't see review. We have better and better design processes (e.g. Merrill's Pebble-in-the-Pond), but I'm still not seeing how we bake review and quality into them. Seems to me we've still a ways to go.
Dave Ferguson says
I hadn’t heard “design debt” before, but I have heard (and spoken about) processes that are front-end loaded.
Which is just a complicated way of saying “pay me now, or pay me later.”
Your comments about quality remind me of the initial resistance to quality in the latter part of the 20th century–remember that Deming had his initial success in Japan, rather than in the U.S.
Quality seemed like extra work, the way you’ll hear critiques about “paralysis by analysis.”
Even today, I think, a fair number of organizations claim that they value quality. It's like the cartoon I saw once of two Romans at the Colosseum, looking down at people being thrown to the lions. "You know," says one, "I'm a Christian, too, but I'm not a fanatic about it."
Clark says
Dave, ah yes, “pay me now or pay me later”. M’lady has two Fram oil filter cups (they look like the filters!) with that legend.
Good point about the overhead. What Watts Humphrey found with the PSP was that initially it slowed people down, but then it actually made them more productive, as their estimates were more accurate and they made fewer mistakes requiring rework. Let alone the quality of the output. Reckon some peer review and systematic analysis of ways to improve would work wonders. Thanks for the feedback.
Steve Flowers says
While it's debatable whether I was actually programming, I have 'some' experience designing and building software. I see many opportunities for improvement in the ISD vertical using well-established software engineering and management practices. Here are the biggies:
1. Patterns. I'm not sure what it is about the world of ISD: whether we think our world of work is too complex to document problem / solution pairs, or we are protective of our ideas. Either way, there's nothing but good that can come from establishing a common patterns library, a common associated language, and open capture and extension of these common patterns. Did some presentations and wrote some position papers with Ian Douglas at FSU; he's a big proponent of patterns. Sadly, the same people we need to convince can't see the promise of 'another best practices compilation'. Sigh:)
2. Unit testing. Here's another one that baffles me. I've spent a significant amount of time on the development side, and in my experience the design eats up the first 75% of the resource / time budget. This is before metal is bent, before one wrench is turned, before anything is actually developed. The last 5% is for (a) QA and / or (b) a scramble to unf*#k the output – usually unsuccessfully. Not sure that this is common everywhere, but it is in government contracting, and it's complete lunacy (I railed against this mindset unsuccessfully for years). Modular abstraction and unit testing are among the many great practices in software engineering that allow large teams to accomplish seemingly impossible feats. Why we aren't moving evaluation up front is beyond me – formative evaluation creates data points, data points correct course, and course correction helps avoid abysmal failure.
3. UxD. This one is relatively new, but the field (some say a horizontal intersection with design verticals) closely reflects ISD goals. The difference being that if you are a bad UxD you aren't employed for long – in stark contrast to my experience with bad ISD. The path into the field is also different. There is no DAT (Design Aptitude Test) for entry into an ISD program: if you are flashing the green, you're in the machine… Principles can be taught, but sadly talent is non-transferable. UxDs move through the ranks by demonstrating aptitude. Those who make hiring decisions for ISDs typically either don't understand the field well enough to make a good judgment, or poor performance is masked by a good interview and a team effort that doesn't isolate what the ISD actually did. If it walks like a rant and quacks like a rant, it's probably a rant:)
We did try a few experiments to make our ID and development processes a bit more like efficient software project management processes. How did it come out? Depends on how you define success. Too many factors to isolate, but it didn't seem to hold, and I split the scene to work as a hermit. ISD folks couldn't understand what was wrong with the take-your-time, waterfall, plow-horse-in-blinders approach. The technical folks didn't understand the unique process elements and considerations within the performance engineering realm (which feels warm to me) that differ from the software engineering realm (which feels cold to me).
As an inclusive definition of process elements, there's nothing wrong with ADDIE; I'm not sure how else you would distill the neck-up activities that contribute to success. The problem is the way it's leveraged: Big A, NO I… or Big D, little D, tiny I, and a sprinkle of E if you're lucky at the very end. I say stratify the A across multiple tiers and move the E up front… Reach a cogent hypothesis faster, sprint, test, correct.
There is plenty that we can learn from the software engineering crowd. There are also plenty of practices that simply DO NOT fit. In either case, the biggest problem we have is personnel selection / preparation. Without changing how we pick and prepare our ISD crop, we're in the vat of sadness for the long haul – the minority of superstars and visionaries can't stem the tide of baddies:(
Clark says
Steve, as I taught interface design while researching learning technology, I've often tried to cross-fertilize, but I found UxD to be ahead of ID (which I think you're also saying). I'd be curious what you think *doesn't* fit. My one response would be that creating a learning experience is different than creating task ability.
Thanks for the feedback!
Steve F says
One example: OOP, the way software engineers view OOP, doesn't fit learning design. Too much cold rigidity and might-need-it-in-the-future refactoring. This was part of the driving concept behind SCORM, and SCORM didn't come out as intended, in my view.
Dave Ferguson says
I love “might-need-it-in-the-future refactoring.” It’s got me imagining the engineering or design equivalent of the person whose apartment is hip-deep in stacks of old newspapers, bread-bag ties, every receipt ever issued. And a large number of cats, many of them still alive.
John Schulz says
Clark,
Once again, thanks for putting a name to concerns I’ve had for years. I can’t even begin to tell you about the debt I was lugging around at some organizations. Somehow, validating root cause and documenting projects always seemed to get cut from project plans in the rush to meet some deadline. Always enjoyable to unravel spaghetti code after its creator has left the organization.
Steve – I always find your comments enlightening. While I agree with all of your points, I especially connect with your comments about talent selection. I had actually included this very idea in a recent presentation about mistakes organizations make with eLearning. Before tapping into all of the wonderful brain power I am finding on Twitter, I thought that maybe 'we' were just getting lazy; that IDs had given in to the corporate demand for 'NOW' at the expense of producing great products.
But after thinking about Cynefin (thanks, Clark!), Dreyfus, and a little light reading around cognitive science, I came to your very conclusion (I even drafted a causal loop!). Our courseware design (and the very fact that we can't think of anything BUT COURSEware) is being driven by the fact that we have few experts in our field. David Merrill suggested that 95% of ID is done by people without ID education (or 'by assignment', as he called it). Cammy Bean's informal survey indicated that 60% of us don't have formal ID degrees. Without this education (through formal university programs or professional development), and without a personal drive to become an expert, most of us only know learning theory as a set of bullet points from a conference presentation. Let's not even get into our lack of the skills required for software development, UxD, assessment, visual design, etc. (I still don't understand why most people don't treat elearning as an application development project. Ohh, I know… because our tools pretend that no coding is required!)
Dreyfus suggested that without experts there's no one who CAN be visionary, no one to push the envelope – or, as Kathy Sierra says, create the revolutionary change that gets us over that 'big freakin' wall'. Because experts have acquired a broad-based, conceptual understanding of the domain, they are able to innovate. The rest do what we've always done, what we're comfortable with. And, someday soon, someone in an industry outside of L&D is going to come along and take our work – because they'll be able to produce products that create meaningful results, and without any of that debt stuff.
(Didn’t want to leave you out Dave – love your stuff too!)
Clark says
Interesting thoughts, John. My short response is: how do we create a framework where we can collect those expert reflections? I see them distributed across some of the top reflective practitioners, but I don't see a systematic process whereby they can interact and refine their understanding, yet I think that would accelerate the field. Am I missing something?