Donald Norman’s book, The Design of Everyday Things, is a must-read for anyone who creates artifacts or interfaces for humans. This one continues in the same vein, but looks at the new technology that has emerged in the roughly 20 years since that book came out, and its implications. There are some interesting thoughts, though only a few hints for learning.
In the book, Don talks about how new technologies are increasingly smart; cars, for example, are almost self-driving (and since the book was published back in 2007, they’re now on the cusp). As a consequence, we have to start thinking deeply about when and where to automate, letting technologies make the decisions, versus when to keep ourselves in the loop. And, in the latter case, when and how we’re kept alert (pilots lose attention trying to monitor an autopilot, even falling asleep).
The issue, he proposes, is the tenuous relationship between an aware partner and the human. He uses the relationship between a horse and rider as an example, talking about loose-rein control and close-rein control. Again, there are times when the rider can be asleep (I recall a gent in an Irish pub bemoaning the passing of the days when “the horse knew the way home”).
He covers a range of data points from existing circumstances as well as experiments in new approaches, ranging from noise to crowd behavior. For noise, he looks at how the sounds mechanical things made were clues to their state and operation, and how we’re losing those clues as we increasingly make things quiet. Engineers are even building noise back in as a feature where technical sophistication has made it disappear. For crowd behavior, one example is how the removal of street signs in a couple of cities has reduced accidents.
At the end, he comes up with a set of design principles:
- Provide rich, complex, and natural signals
- Be predictable
- Provide a good conceptual model
- Make the output understandable
- Provide continual awareness, without annoyance
- Exploit natural mapping to make interaction understandable and effective
For learning, he talks about how robots that teach are one place where such animated and embodied avatars make sense, whereas in many situations they’re more challenging. He talks about how they don’t need much mobility, can speak, and can be endearing. Not to replace teachers, but to supplement them. Certainly we have the software capability, but we have to wonder what sort of system makes it worth investing in actual embodiment versus speaking from a mobile device or computer.
As an exercise, I looked at his design principles to see what might transfer to the design of learning experiences. The main issue is that in learning we want the learner facing problems, focusing on the task of creating a solution with overt cognitive awareness, as opposed to an elegant, almost unconscious accomplishment of a goal. This suggests that rule 2, ‘be predictable’, might be good in non-critical areas of focus, but not in the main area. The rest seem appropriate for learning experiences as well.
This is a thoughtful book, weaving a number of elements together to capture a notion rather than hammer home critical outcomes. As such, it is not for the casual designer, but for those looking to take their design to the ‘next level’, or to consider the directions that are coming and how we might prepare people for them. Just as Don proposed in The Invisible Computer that the interface design folks should be part of the product design team, so too should product support specialists, sales trainers, and customer training designers be part of the design team going forward, as what people will have to learn in order to use new systems is increasingly a concern in the design of systems, not just products.