Robert Ballard told a personal and inspiring tale of exploring the world’s oceans and using technology to broaden our reach.
Norman’s Design of Future Things
Donald Norman’s book, The Design of Everyday Things, is a must-read for anyone who creates artifacts or interfaces for humans. This one continues in the same vein, but looks at the new technologies that have emerged in the roughly 20 years since that book came out, and their implications. There are some interesting thoughts, though few hints for learning.
In the book, Don talks about how new technologies are increasingly smart, e.g. cars are almost self-driving (and since the book was published back in 2007, they’re now already on the cusp). As a consequence, we have to start thinking deeply about when and where to automate, letting technologies make decisions, versus when to keep ourselves in the loop. And, in the latter case, when and how we’re kept alert (pilots lose attention trying to monitor an autopilot, even falling asleep).
The issue, he proposes, is the tenuous relationship between an aware technological partner and the human. He uses the relationship between a horse and rider as an example, talking about loose-rein control and close-rein control. Again, there are times when the rider can be asleep (I recall a gent in an Irish pub bemoaning the passing of the days when “the horse knew the way home”).
He covers a range of data points, from existing circumstances as well as experiments in new approaches, spanning everything from noise to crowd behavior. For noise, he looks at how the sounds mechanical things made were clues to their state and operation, and how we’re losing those clues as we increasingly make things quiet. Engineers are even building noise back in as a feature when technical sophistication has made it disappear. For crowd behavior, one example is how the removal of street signs in a couple of cities has reduced accidents.
At the end, he comes up with a set of design principles:
- Provide rich, complex, and natural signals
- Be predictable
- Provide a good conceptual model
- Make the output understandable
- Provide continual awareness, without annoyance
- Exploit natural mapping to make interaction understandable and effective
For learning, he talks about how robots that teach are one place where such animated and embodied avatars make sense, whereas in many situations they’re more challenging. He talks about how they don’t need much mobility, can speak, and can be endearing. Not to replace teachers, but to supplement them. Certainly we have the software capability, but we have to wonder when it makes sense to invest in the actual embodiment versus speaking from a mobile device or computer.
As an exercise, I looked at his design principles to see what might transfer over to the design of learning experiences. The main issue is that in learning, we want the learner facing problems, focusing on the task of creating a solution with overt cognitive awareness, as opposed to an elegant, almost unconscious, accomplishment of a goal. This suggests that rule 2, ‘be predictable’, might be good in non-critical areas of focus, but not in the main area. The rest seem appropriate for learning experiences as well.
This is a thoughtful book, weaving a number of elements together to capture a notion, not hammer home critical outcomes. As such, it is not for the casual designer, but for those looking to take their design to the ‘next level’, or to consider the directions that are coming and how we might prepare people for them. Just as Don proposed in The Invisible Computer that the interface design folks should be part of the product design team, so too should the product support specialists, sales training team, and customer training designers be part of the design team going forward, as what people will have to learn in order to use new systems is increasingly a concern in the design of systems, not just products.
Experience, the API
Last week I was on a panel at #DevLearn about the API previously known as Tin Can, and some thoughts crystallized. Touted as the successor to SCORM, it’s ridiculously simple, just subject, verb, object: “I did this”, such as ‘John Doe read Engaging Learning’ but also ‘Jane Doe took this picture’. And this has interesting implications.
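To make that shape concrete, here’s a minimal sketch of what such a statement might look like, rendered as a Python dictionary; the names, email address, verb URI, and activity URI are all invented for illustration.

```python
# A single "actor verb object" statement, sketched as a Python dict.
# All identifiers here are made-up examples, not from a real system.
statement = {
    "actor": {
        "name": "John Doe",
        "mbox": "mailto:john.doe@example.com",
    },
    "verb": {
        "id": "http://example.com/verbs/read",
        "display": {"en-US": "read"},
    },
    "object": {
        "id": "http://example.com/activities/engaging-learning",
        "definition": {"name": {"en-US": "Engaging Learning"}},
    },
}
```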
First, the API itself is very simple, and while it can be useful on its own, it’ll be really useful when there are tools around it. It’s just a foundation upon which things can be done. There’ll need to be places to record these actions, and ones to pull together sequences of recommendations into learning paths, and more. You’ll want to build portfolios of what you’ve done (not just what content you’ve touched).
But it’s about more than learning. These can cross accessing performance support resources, actions in social media systems, and more. This person touched that resource. That person edited this file. This other person commented.
One big interesting opportunity is to be able to start mining these. We can start looking at evidence of what folks did and finding good and bad outcomes. It’s a consistent basis for big data and analytics. It’s also a basis to start customizing: if the people who touched this resource were better able to solve problem X, other people with that problem maybe should also touch it. If they’ve already tried X and Y, we can next recommend Z. Personalization/customization.
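As a rough, made-up illustration of that mining idea: given a pile of statements and a list of people who went on to solve problem X, we could count which resources the successful folks touched, and surface those for others facing X. The data shapes and the threshold here are invented.

```python
from collections import Counter

def resources_linked_to_success(statements, successful_actors, min_count=2):
    """Find resources touched by people who went on to solve the problem.

    statements: iterable of dicts shaped like the statement sketch above.
    successful_actors: set of actor names known to have solved problem X.
    """
    touched = Counter(
        s["object"]["id"]
        for s in statements
        if s["actor"]["name"] in successful_actors
    )
    # Recommend resources touched by at least `min_count` successful solvers.
    return [resource for resource, n in touched.items() if n >= min_count]
```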
An audience member asked what they should take back to their org, and who needed to know what. My short recommendations:
Developers need to start thinking about instrumenting everything. Everything people touch should report out on their activity. And then start aggregating this data. Mobile, systems, any technology touch. People can self report, but it’s better to the extent that it’s automated.
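As a sketch of what that instrumentation might look like, assuming an xAPI-style record store (the endpoint URL and credentials are placeholders, and this uses the Python requests library):

```python
import requests  # assumes the requests library is installed

LRS_ENDPOINT = "https://lrs.example.com/xapi"   # placeholder record store URL
AUTH = ("lrs_user", "lrs_password")             # placeholder credentials

def report_activity(actor_name, mbox, verb_id, verb_label, activity_id):
    """Send a single 'actor verb object' statement to the record store."""
    statement = {
        "actor": {"name": actor_name, "mbox": mbox},
        "verb": {"id": verb_id, "display": {"en-US": verb_label}},
        "object": {"id": activity_id},
    }
    return requests.post(
        LRS_ENDPOINT + "/statements",
        json=statement,
        auth=AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )

# e.g. report_activity("Jane Doe", "mailto:jane@example.com",
#                      "http://example.com/verbs/photographed", "took a picture of",
#                      "http://example.com/activities/pump-assembly")
```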
Managers need to recognize that they’re going to have very interesting opportunities to start tracking and mining information as a basis for understanding what’s happening. Coupled with other rich models, such as of content (hence the need for a content strategy), tasks, and learners, we can start doing more things by rules.
And designers need to realize, and then take advantage of, a richer suite of options for learning experiences. Have folks take a photo of an example of X. You can ask them to discuss Y. Have them collaborate to develop a Z. You could even send your learners out to do a flash mob ;).
Learning is not about content, it’s about experience, and now we have ways to talk about it and track it. It’s just a foundation, just a standard, just plumbing, just a start, but valuable as all that.
Gary Woodill #mobilearnasia Keynote Mindmap
Gary Woodill gave a broad reaching keynote covering the past, present, and future of mobile learning. Peppered with great examples and good thinking, it was an illuminating kickoff to the MobiLearnAsia conference.
Beyond eBooks
Among the things I’ve been doing lately is talking to folks who’ve got content and are thinking about the opportunities beyond books. This is a good thing, but I think it’s time to think even further. Because, frankly, the ebook formats are still too limited.
It’s no longer about the content, it’s about the experience. Just putting your content onto the web or digital devices isn’t a learning solution, it’s an information solution. So I’m suggesting transcending merely putting your content into digital form, and starting to think about the opportunities to leverage what technology can do. It started with those companion sites, with digital images, video, audio, and interactives that accompany textbooks, but the opportunities go further.
We can now embed the digital media within ebooks. Why ebooks, and not the web? I think it’s primarily about the ergonomics. I just find it challenging to read on screen. I want to curl up with a book, getting comfortable.
However, we can’t quite do what I want with ebooks. Yes, we can put in richer images, digital audio, and video. The interactives part is still a barrier, however. The ebook standards don’t yet support it, though they could. Apple’s expanded the ePub format with the ability to do quick knowledge checks (e.g. true/false or multiple choice questions). There’s nothing wrong with this, as far as it goes, but I want to go further.
I know of a few organizations, and am sure there are more than a few, that are experimenting with a new specification for ePub that supports richer interaction, more specifically pretty much anything you can do with HTML5. This is cool, and potentially really important.
Let me give you a mental vision of what could be on tap. There’s an app for iOS and Android called Imaginary Range. It’s an interesting hybrid between a graphic novel and a game. You read through several pages of story, and then there’s an embedded game you play that’s tied to, and advances, the story.
Imagine putting that into play for learning: you read a graphic novel that’s about something interesting and/or important, and then there’s a simulation game embedded where you have to practice the skills. While there’s still the problem of a limited interpretation of what’s presented (a la the non-connectionist MOOCs), in well-defined domains these could be rich. Wrapping a dialog capability around the ebook, which is another interesting opportunity, only adds to the learning potential.
I’ll admit that I think this is not really mobile in the sense of running on a pocketable, but instead it’s a tablet proposition. Still, I think there’s real value to be found.
Top 10 Tools for Learning
Among the many things my colleague Jane Hart does for our community is to compile the Top 100 Tools for learning each year. I think it’s a very interesting exercise, showing how we ourselves learn, and the fact that it’s been going on for a number of years provides interesting insight. Here are my tools, in no particular order:
WordPress is how I host and write this Learnlets blog, thinking out loud.
Keynote is how I develop and communicate my thinking to audiences (whether I eventually have to port to PPT for webinars or not).
Twitter is how I track what people find interesting.
Facebook is a way to keep in touch with a tighter group of people on broader topics than just learning. I’m not always happy with it, but it works.
Skype is a regular way to communicate with people, using a chat as a backchannel for calls, or keeping open for quick catch ups with colleagues. An open chat window with my ITA colleagues is part of our learning together.
OmniGraffle is the tool I use to diagram, one of the ways I understand and communicate things.
OmniOutliner often is the way I start thinking about presentations and papers.
Google is my search tool.
Word is still the way I write when I need to go industrial-strength, getting the nod over Pages because of its outlining and keyboard shortcuts.
GoodReader on the qPad is the way I read and markup documents that I’m asked to review.
That’s 10, so I guess I can’t mention how I’ve been using Graphic Converter to edit images, or GoToMeeting as the most frequent (tho’ by no means the only) web conferencing environment I’ve been asked to use.
I exhort you to also pass on your list to Jane, and look forward to the results.
The Tablet Proposition
RJ Jacquez asks the question “is elearning on tablets really mlearning?”. And, of course, the answer is no, elearning on tablets is just elearning, and mlearning is something different. But it got me to thinking about where tablets do fit in the mlearning picture, in ways that go beyond what I’ve said in the past.
I wasn’t going to bother to say why I answered no before getting to the point of my post, but then I noticed that more than half of the respondents say it is (quelle horreur), so I’ll get that out of the way first. If your mobile solution isn’t doing something unique because of where (or when) you are, if it’s not doing something unique to the context, it’s not mlearning. Using a tablet like a laptop is not mlearning. If you’re using it to solve problems in your location, to access information you need here and now, it’s mobile, whether pocketable or not. That’s what mlearning is, and it’s mostly about performance support, or contextualized learning augmentation; it’s not just about accessing info conveniently.
Which actually segues nicely into my main point. So let’s ask, when would you want a tablet instead of a pocketable when you’re on the go? I think the answer is pretty clear: when you need more information or interactivity than a pocketable can handle, and you’re not as concerned about space.
Taking the first situation: there are times when a pocketable device just can’t cope with the amount of screen real estate you need. If you need a rich interaction to establish information, such as numerous related fields or a broad picture of the context, you’re going to be hard pressed to use a pocketable device. You can do it if you need to, with some complicated interface design, but if you’ve the leeway, a tablet’s better.
And that leeway is the second point: if you’re not running around from cars to planes, but instead either traversing a floor in a more leisurely or systematic way, or working in a relatively confined space, a tablet is going to work out fine. The obvious places in use are hospitals or airplane cockpits, but this is true of factory floors, restaurants, and more.
There is a caveat: if large amounts of text need to be captured, neither a pocketable nor a tablet is going to be particularly great. Handwriting capture is still problematic, and touchscreen keyboards aren’t industrial-strength text entry solutions. Audio capture is a possibility, but the transcription may need editing. So, if it’s keyboard input, use something with a real keyboard: netbook or laptop.
So, that’s my pragmatic take on when tablets take over from pocketables. I take a tablet to meetings and when seated for longer periods of time, but it’s packed away when I’m hopping from car to plane, on a short shopping trip, etc. It’s about tradeoffs, and your tradeoff, if you’re targeting one device, will be mobility versus information. Well, and text.
The point is to be systematic and strategic about your choice of devices. Opportunism is ok, but unexamined decisions can bite you. Make sense?
HyperCard reflections #hypercard25th
It’s coming up to the 25th anniversary of HyperCard, and I’m reminded of how much that application played a role in my thinking and working at the time. Developed by Bill Atkinson, it was really ‘programming for the masses’, a tool for the Macintosh that allowed folks to easily build simple, and even complex, applications. I’d programmed in other environments: Algol, Pascal, Basic, Forth, and even a little Lisp, but this was a major step forward in simplicity and power.
A colleague of mine who was working at Claris suggested how cool this new tool was going to be, and I taught myself HyperCard while doing a postdoc at the University of Pittsburgh’s Learning Research and Development Center. I used it to prototype my ideas of a learning tool we could use for our research on children’s mental models of science. I then used it to program a game based upon my PhD research, embedding analogical reasoning puzzles into a game (Voodoo Adventure; see screenshot). I wrote it up and got it published as an investigation of how games could be used as cognitive research tools. To little attention, back in ’91 :).
While teaching HCI, I had my students use HyperCard to develop their interface solutions to my assignments. The intention was to allow them to focus more on design and less on syntax. I also reflected on how the interface encapsulated, to some degree, what Andi diSessa called ‘incremental advantage’, a property of an environment that rewards greater investment in understanding with greater power to control the system. HyperCard’s buttons, fields, and backgrounds provided this, up until the next step to HyperTalk (which also had that capability once you got into the programming notion). I also proposed that such an environment could support ‘discoverability’ (a concept I learned from Jean Marc Robert), where an environment could support experimentation to learn to use it in steady ways. Another paper resulted.
I also used HyperCard to develop applications in my research. We used it to develop Quest for Independence, a game that helped kids who grew up without parents (e.g. in foster care) learn to survive on their own. Similarly, we developed an HCI performance support tool. Both of these later got ported to the web as soon as CGIs came out that let the web retain state (you can still play Quest; as far as I know it was the first serious game you could play on the web).
The other ways HyperCard was used are well known (e.g. Myst), but it was a powerful tool for me personally, and I still miss having an easy environment for prototyping. I don’t program anymore (I add value in other ways), but I still remember it fondly, and would love to have it running on my iPad as well! Kudos to Bill and Apple for creating and releasing it; a shame it was eventually killed through neglect.
mLearning 3.0
Robert Scoble has written about Qualcomm’s announcement of a new level of mobile device awareness. He characterizes the phone’s transitions from voice (mobile 1.0) to tapping (2.0) to the device knowing what to do (3.0). While I’d characterize it differently, he’s spot on about the importance of this new capability.
I’ve written before about how the missed opportunity is context awareness, specifically not just location but time. What Qualcomm has created is a system that combines location awareness, time awareness, and the ability to build and leverage a rich user profile. Supposedly, according to Robert, it’s also tapped into the accelerometer, altimeter, and whatever other sensors there are. It’ll be able to know in pretty fine detail a lot more about where you are and what you’re doing.
Gimbal is mostly focused on marketing (of course, sigh), but imagine what we could do for learning and performance support!
We can now know who you are and what you’re doing, so:
- a sales team member visiting a client would get specialized information different than what a field service tech would get at the same location
- a student of history would get different information at a particular location such as Boston than an architecture student would
- a person learning how to manage meetings more efficiently would get different support than a person working on making better presentations
I’m sure you can see where this is going. It may well be that we can coopt the Gimbal platform for learning as well. We’ve had the capability before, but now it may be much easier with an SDK available. Writing rules to take advantage of all the sensors is going to be a big chore, ultimately, but if they do the hard yards for their needs, we may be able to ride on their coattails for ours. It may be an instance when marketing does our work for us!
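To make the role-plus-location idea concrete, here’s a toy sketch of the kind of rule we might write once a platform can tell us who someone is and where they are; the roles, places, and resources are all invented.

```python
# Toy context rules: (role, place) -> resources to surface here and now.
# Roles, places, and resource names are invented for illustration.
CONTEXT_RULES = {
    ("sales", "client_site"): ["account_history", "pricing_sheet"],
    ("field_service", "client_site"): ["install_manual", "open_tickets"],
    ("history_student", "boston_common"): ["revolutionary_war_tour"],
    ("architecture_student", "boston_common"): ["landmark_building_notes"],
}

def resources_for(role, place):
    """Return the resources a context-aware system might surface in this context."""
    return CONTEXT_RULES.get((role, place), [])

print(resources_for("sales", "client_site"))          # account-focused support
print(resources_for("field_service", "client_site"))  # repair-focused support
```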
Mobile really is a game changer, and this is just another facet taking it much further along the path of digital human augmentation that’s making us much more effective in the moment, and ultimately more capable over time. Maybe even wiser. Think about that.
Emergent & Semantic Learning
The last of the thoughts still percolating in my brain from #mlearncon finally emerged when I sat down to create a diagram to capture my thinking (one way I try to understand things is to write about them, but I also frequently diagram them to help me map the emerging conceptual relationships into spatial relationships).
What I was thinking about was how to distinguish between emergent opportunities for driving learning experiences, and semantic ones. When we built the Intellectricity© system, we had a batch of rules that guided how we were sequencing the content, based upon research on learning (rather than hardwiring paths, which is what we mostly do now). We didn’t prescribe, we recommended, so learners could choose something else, e.g. the next best, or browse to what they wanted. As a consequence, we also could have a machine learning component that would troll the outcomes, and improve the system over time.
And that’s the principle here, where mainstream systems are now capable of doing similar things. What you see here are semantic rules (made up ones), explicitly making recommendations, ideally grounded in what’s empirically demonstrated in research. In places where research doesn’t stipulate, you could also make principled recommendations based upon the best theory. These would recommend objects to be pulled from a pool or cloud of available content.
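As an invented example, a semantic rule could look something like this sketch, which recommends (not prescribes) the next object from the pool based on what the learner just did; real rules would be grounded in the learning research.

```python
def recommend_next(learner, content_pool):
    """A made-up semantic rule: suggest, don't prescribe, the next content object.

    Stand-in logic: if the learner just failed a practice item, recommend a
    worked example on the same concept; otherwise recommend a slightly harder
    practice item.
    """
    concept = learner["current_concept"]
    if learner["last_practice_passed"]:
        wanted = ("practice", concept, learner["level"] + 1)
    else:
        wanted = ("example", concept, learner["level"])
    matches = [obj for obj in content_pool
               if (obj["type"], obj["concept"], obj["level"]) == wanted]
    # Return a list, so the learner can still choose the next-best or browse.
    return matches or content_pool
```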
However, as you track outcomes, e.g. success on practice, and start looking at the results by doing data analytics, you can start trolling for emergent patterns (again, made up). Here we might find confirmation (or the converse!) of the empirical rules, as well as new patterns, some that we may be able to label semantically, and perhaps some that would be entirely new. Which helps explain the growing interest in analytics. And, if you’re doing this across massive populations of learners, as is possible across institutions, or with really big organizations, you’re talking the ‘big data’ phenomenon that will provide the necessary quantities to start generating lots of these outcomes.
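And on the emergent side, an equally made-up sketch: trawl the logged outcomes to see which objects are associated with better practice success, which might confirm (or contradict) the hand-authored rules, or surface new patterns to label.

```python
from collections import defaultdict

def success_rate_by_object(outcome_log):
    """outcome_log: list of (object_id, passed) tuples gathered from tracking."""
    totals = defaultdict(lambda: [0, 0])  # object_id -> [passes, attempts]
    for object_id, passed in outcome_log:
        totals[object_id][1] += 1
        if passed:
            totals[object_id][0] += 1
    # Emergent pattern: objects whose learners go on to succeed most often.
    return sorted(
        ((passes / attempts, obj) for obj, (passes, attempts) in totals.items()),
        reverse=True,
    )
```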
Another possibility is to specifically set up situations where you randomly trial a couple of alternatives that address known research questions, and use this data opportunity to conduct your experiments. This way we can advance our learning more quickly using our own hypotheses, while we look for emergent information as well.
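Such a trial can be as simple as randomly assigning each learner one of two candidate approaches and logging the assignment alongside their outcomes; the treatment names here are placeholders.

```python
import random

def assign_treatment(learner_id,
                     treatments=("worked_example_first", "practice_first")):
    """Randomly assign one of two candidate approaches and record the choice."""
    choice = random.choice(treatments)
    # In practice this assignment would be logged with the learner's outcomes
    # so the two approaches can be compared later.
    return {"learner": learner_id, "treatment": choice}
```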
Until the new patterns emerge, I recommend adapting on the basis of what we know, but simultaneously you should be trolling for opportunities to answer questions that emerge as you design, and look for emergent patterns as well. We have the capability (ok, so we had it over a decade ago, but now the capability is on tap in mainstream solutions, not just bespoke systems), so now we need the will. This is the benefit of thinking about content as systems – models and architectures – not just as unitary files. Are you ready?