Gary Woodill gave a broad-reaching keynote covering the past, present, and future of mobile learning. Peppered with great examples and good thinking, it was an illuminating kickoff to the MobiLearnAsia conference.
Beyond eBooks
Among the things I’ve been doing lately is talking to folks who’ve got content and are thinking about the opportunities beyond books. This is a good thing, but I think it’s time to think even further. Because, frankly, the ebook formats are still too limited.
It’s no longer about the content, it’s about the experience. Just putting your content onto the web or digital devices isn’t a learning solution, it’s an information solution. So I’m suggesting moving beyond simply putting your content online, and starting to think about the opportunities to leverage what technology can do. It started with those companion sites, with digital images, video, audio, and interactives that accompany textbooks, but the opportunities go further.
We can now embed the digital media within ebooks. Why ebooks, not on the web? I think it’s primarily about the ergonomics. I just find it challenging to read on screen. I want to curl up with a book, getting comfortable.
However, we can’t quite do what I want with ebooks. Yes, we can put in richer images, digital audio, and video. The interactives part is still a barrier, however. The ebook standards don’t yet support it, though they could. Apple’s expanded the ePub format with the ability to do quick knowledge checks (e.g. true/false or multiple choice questions). There’s nothing wrong with this, as far as it goes, but I want to go further.
I know a few organizations, and am sure there are more than a few, that are experimenting with a new specification for ePub that supports richer interaction, more specifically pretty much anything you can do with HTML5. This is cool, and potentially really important.
Let me give you a mental vision of what could be on tap. There’s an app for iOS and Android called Imaginary Range. It’s an interesting hybrid between a graphic novel and a game. You read through several pages of story, and then there’s an embedded game you play that’s tied to, and advances, the story.
Imagine putting that into play for learning: you read a graphic novel that’s about something interesting and/or important, and then there’s a simulation game embedded where you have to practice the skills. While there’s still the problem of a limited interpretation of what’s presented (ala the non-connectionist MOOCs), in well-defined domains these could be rich. Wrapping a dialog capability around the ebook, another interesting possibility, only adds to the learning opportunity.
I’ll admit that I think this is not really mobile in the sense of running on a pocketable, but instead it’s a tablet proposition. Still, I think there’s real value to be found.
Top 10 Tools for Learning
Among the many things my colleague Jane Hart does for our community is to compile the Top 100 Tools for Learning each year. I think it’s a very interesting exercise, showing how we ourselves learn, and the fact that it’s been going on for a number of years provides interesting insight. Here are my tools, in no particular order:
- WordPress is how I host and write this Learnlets blog, thinking out loud.
- Keynote is how I develop and communicate my thinking to audiences (whether I eventually have to port to PPT for webinars or not).
- Twitter is how I track what people find interesting.
- Facebook is a way to keep in touch with a tighter group of people on broader topics than just learning. I’m not always happy with it, but it works.
- Skype is a regular way to communicate with people, using a chat as a backchannel for calls, or keeping open for quick catch-ups with colleagues. An open chat window with my ITA colleagues is part of our learning together.
- OmniGraffle is the tool I use to diagram, one of the ways I understand and communicate things.
- OmniOutliner is often the way I start thinking about presentations and papers.
- Google is my search tool.
- Word is still the way I write when I need to go industrial-strength, getting the nod over Pages because of its outlining and keyboard shortcuts.
- GoodReader on the iPad is the way I read and mark up documents that I’m asked to review.
That’s 10, so I guess I can’t mention how I’ve been using Graphic Converter to edit images, or GoToMeeting as the most frequent (tho’ by no means the only) web conferencing environment I’ve been asked to use.
I exhort you to also pass on your list to Jane, and look forward to the results.
The Tablet Proposition
RJ Jacquez asks the question “is elearning on tablets really mlearning?”. And, of course, the answer is no, elearning on tablets is just elearning, and mlearning is something different. But it got me to thinking about where tablets do fit in the mlearning picture, in ways that go beyond what I’ve said in the past.
I wasn’t going to bother explaining why I answered no before getting to the point of my post, but then I noticed that more than half of the respondents say it is (quelle horreur), so I’ll get that out of the way first. If your mobile solution isn’t doing something unique because of where (or when) you are, if it’s not doing something unique to the context, it’s not mlearning. Using a tablet like a laptop is not mlearning. If you’re using it to solve problems in your location, to access information you need here and now, it’s mobile, whether pocketable or not. That’s what mlearning is, and it’s mostly about performance support, or contextualized learning augmentation; it’s not just about conveniently accessing information.
Which actually segues nicely into my main point. So let’s ask, when would you want a tablet instead of a pocketable when you’re on the go? I think the answer is pretty clear: when you need more information or interactivity than a pocketable can handle, and you’re not as concerned about space.
Taking the first situation: there are times when a pocketable device just can’t cope with the amount of screen real estate you need. If you need a rich interaction to present information, numerous related fields or a broad picture of context, you’re going to be hard-pressed to use a pocketable device. You can do it if you need to, with some complicated interface design, but if you’ve got the leeway, a tablet’s better.
And that leeway is the second point: if you’re not running around from cars to planes, but instead either on a floor you’re traversing in a more leisurely or systematic way, or in a relatively confined space, a tablet is going to work out fine. The obvious examples are hospitals and airplane cockpits, but this is true of factory floors, restaurants, and more.
There is a caveat: if large amounts of text need to be captured, neither a pocketable nor a tablet is going to be particularly great. Handwriting capture is still problematic, and touchscreen keyboards aren’t industrial-strength text entry solutions. Audio capture is a possibility, but the transcription may need editing. So, if it’s keyboard input, use something with a real keyboard: a netbook or laptop.
So, that’s my pragmatic take on when tablets take over from pocketables. I take a tablet to meetings and when seated for longer periods of time, but it’s packed away when I’m hopping from car to plane, on a short shopping trip, etc. It’s about tradeoffs, and your tradeoff, if you’re targeting one device, will be mobility versus information. Well, and text.
The point is to be systematic and strategic about your choice of devices. Opportunism is ok, but unexamined decisions can bite you. Make sense?
HyperCard reflections #hypercard25th
It’s coming up on the 25th anniversary of HyperCard, and I’m reminded of how much that application played a role in my thinking and working at the time. Developed by Bill Atkinson, it was really ‘programming for the masses’, a tool for the Macintosh that allowed folks to easily build simple, and even complex, applications. I’d programmed in other environments: Algol, Pascal, Basic, Forth, and even a little Lisp, but this was a major step forward in simplicity and power.
A colleague of mine who was working at Claris suggested how cool this new tool was going to be, and I taught myself HyperCard while doing a postdoc at the University of Pittsburgh’s Learning Research and Development Center. I used it to prototype my ideas of a learning tool we could use for our research on children’s mental models of science. I then used it to program a game based upon my PhD research, embedding analogical reasoning puzzles into a game (Voodoo Adventure; see screenshot). I wrote it up and got it published as an investigation of how games could be used as cognitive research tools. To little attention, back in ’91 :).
While teaching HCI, I had my students use HyperCard to develop their interface solutions to my assignments. The intention was to allow them to focus more on design and less on syntax. I also reflected on how the interface encapsulated, to some degree, what Andy diSessa called ‘incremental advantage’, a property of an environment that rewards greater investments in understanding with greater power to control the system. HyperCard’s buttons, fields, and backgrounds provided this, up until the next step to HyperTalk (which also had that capability once you got into the programming notion). I also proposed that such an environment could support ‘discoverability’ (a concept I learned from Jean-Marc Robert), where an environment supports experimentation as a way of learning to use it. Another paper resulted.
I also used HyperCard to develop applications in my research. We used it to develop Quest for Independence, a game that helped kids who grew up without parents (e.g. in foster care) learn to survive on their own. Similarly, we developed an HCI performance support tool. Both of these later got ported to the web as soon as CGIs came out that let the web retain state (you can still play Quest; as far as I know it was the first serious game you could play on the web).
The other ways HyperCard was used are well known (e.g. Myst), but it was a powerful tool for me personally, and I still miss having an easy environment for prototyping. I don’t program anymore (I add value in other ways), but I still remember it fondly, and would love to have it running on my iPad as well! Kudos to Bill and Apple for creating and releasing it; a shame it was eventually killed through neglect.
mLearning 3.0
Robert Scoble has written about Qualcomm’s announcement of a new level of mobile device awareness. He characterizes the phone’s transition from voice (mobile 1.0) to tapping (2.0) to the device knowing what to do (3.0). While I’d characterize it differently, he’s spot on about the importance of this new capability.
I’ve written before about how the missed opportunity is context awareness, specifically not just location but time. What Qualcomm has created is a system that combines location awareness, time awareness, and the ability to build and leverage a rich user profile. Supposedly, according to Robert, it’s also tapped into the accelerometer, altimeter, and whatever other sensors there are. It’ll be able to know in pretty fine detail a lot more about where you are and what you’re doing.
Gimbal is mostly focused on marketing (of course, sigh), but imagine what we could do for learning and performance support!
We can now know who you are and what you’re doing, so:
- a sales team member visiting a client would get specialized information, different from what a field service tech would get at the same location.
- a student of history would get different information at a particular location, such as Boston, than an architecture student would.
- a person learning how to manage meetings more efficiently would get different support than a person working on making better presentations.
I’m sure you can see where this is going. It may well be that we can co-opt the Gimbal platform for learning as well. We’ve had the capability before, but now it may be much easier with an SDK available. Writing rules to take advantage of all the sensors is ultimately going to be a big chore, but if they do the hard yards for their needs, we may be able to ride on their coattails for ours. It may be an instance where marketing does our work for us!
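The who-plus-where matching in the examples above can be sketched as a simple rule table. Everything here is invented for illustration (the roles, locations, and content strings are mine, not Gimbal’s); a real system would pull the role from a user profile and the location from the device’s sensors:

```python
# Toy rule table: (role, location) -> content to deliver.
# All keys and values are made up for illustration.
CONTENT_RULES = {
    ("sales", "client_site"): "account history and talking points",
    ("field_service", "client_site"): "equipment service records",
    ("history_student", "boston"): "notes on Revolutionary-era sites",
    ("architecture_student", "boston"): "notes on building styles and structure",
}

def recommend(role: str, location: str) -> str:
    """Return support content matched to who you are and where you are."""
    return CONTENT_RULES.get((role, location), "general information for this location")
```

The same idea extends naturally: add time of day or current activity to the key, and the table becomes the kind of context-aware rule set described above.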
Mobile really is a game changer, and this is just another facet taking it much further along the digital human augmentation that’s making us much more effective in the moment, and ultimately more capable over time. Maybe even wiser. Think about that.
Emergent & Semantic Learning
The last of the thoughts still percolating in my brain from #mlearncon finally emerged when I sat down to create a diagram to capture my thinking (one way I try to understand things is to write about them, but I also frequently diagram them to help me map the emerging conceptual relationships into spatial relationships).
What I was thinking about was how to distinguish between emergent opportunities for driving learning experiences, and semantic ones. When we built the Intellectricity© system, we had a batch of rules that guided how we were sequencing the content, based upon research on learning (rather than hardwiring paths, which is what we mostly do now). We didn’t prescribe, we recommended, so learners could choose something else, e.g. the next best, or browse to what they wanted. As a consequence, we also could have a machine learning component that would trawl the outcomes, and improve the system over time.
And that’s the principle here, where mainstream systems are now capable of doing similar things. What you see here are semantic rules (made up ones), explicitly making recommendations, ideally grounded in what’s empirically demonstrated in research. In places where research doesn’t stipulate, you could also make principled recommendations based upon the best theory. These would recommend objects to be pulled from a pool or cloud of available content.
However, as you track outcomes, e.g. success on practice, and start looking at the results through data analytics, you can start trawling for emergent patterns (again, made up). Here we might find confirmation (or the converse!) of the empirical rules, as well as potentially new patterns that we may be able to label semantically, and even perhaps some that would be entirely new. Which helps explain the growing interest in analytics. And, if you’re doing this across massive populations of learners, as is possible across institutions or within really big organizations, you’re talking the ‘big data’ phenomenon that will provide the quantities necessary to start generating lots of these outcomes.
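As a minimal sketch of the pairing described above, assuming an invented semantic rule and made-up object names: one function makes the explicit recommendation, while logged practice outcomes are trawled later for emergent per-object patterns.

```python
from collections import defaultdict

# Hypothetical semantic rule (not from any actual research): after a
# failed attempt, recommend a worked example; after success, advance.
def recommend_next(succeeded: bool) -> str:
    return "practice_problem" if succeeded else "worked_example"

# Outcome log: object name -> list of success flags from practice.
outcomes = defaultdict(list)

def log_outcome(obj: str, success: bool) -> None:
    outcomes[obj].append(success)

def emergent_patterns(min_trials: int = 5) -> dict:
    """Trawl the log for per-object success rates, once there's enough data."""
    return {
        obj: sum(flags) / len(flags)
        for obj, flags in outcomes.items()
        if len(flags) >= min_trials
    }
```

The interesting part is the loop this enables: rates that surprise you become candidates for new semantic rules, closing the gap between what the research says and what your own data shows.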
Another possibility is to specifically set up situations where you randomly trial a couple of alternatives that address known research questions, and use this data opportunity to conduct your experiments. This way we can advance our learning more quickly using our own hypotheses, while we look for emergent information as well.
Until the new patterns emerge, I recommend adapting on the basis of what we know, but simultaneously you should be trawling for opportunities to answer questions that emerge as you design, and look for emergent patterns as well. We have the capability (ok, so we had it over a decade ago, but now the capability is on tap in mainstream solutions, not just bespoke systems), so now we need the will. This is the benefit of thinking about content as systems – models and architectures – not just as unitary files. Are you ready?
5 Phrases to Make Mobile Work
Today I was part of a session at the eLearning Guild’s mLearnCon mlearning conference on Making Mobile Work. For my session I put my tongue slightly in cheek and suggested that there were 5 phrases you need to master to Make mLearning Work. Here they are, for your contemplation.
The first one is focused on addressing either or both of yourself or any other folks who aren’t yet behind the movement to mobile:
How does your mobile device make you smarter?
The point being that there are lots of ways we’re all already using mobile to help us perform. We look up product info while shopping, use calculators to split up the bill, call folks for information while problem-solving (like what to bring home from the grocery store), and take photos of things we need to remember like hotel room numbers or parking spots. If you aren’t pushing this envelope, you should be. And if folks aren’t recognizing the connection between how they help themselves and what the organization could be doing for employees or customers, you should be helping them.
The second one focuses on looking beyond the initial inference from the phrase “mlearning”:
Anything but a course!
Here we’re trying to help our stakeholders (and designers) think beyond the course and think about performance support, informal learning, collaboration, and more. While it might be about augmenting a course, it’s more likely to be access to information and people, as well as computational support. Mobile learning is really mobile performance support and mobile social.
The third key phrase emphasizes taking a strategic approach:
Where’s the business need?
Here we’re emphasizing the ‘where’ and the ‘business’. What’s important is thinking about meeting real business needs, with metrics and everything. What do the folks who are performing away from their desks need? What small thing could you be doing that would make that activity have a much more positive impact on the bottom line?
The fourth phrase is specifically focused on design:
What’s the least I can do for you?
It’s not about doing everything you can, but instead focusing on the minimal impact to get folks back into the workflow. Mobile is about the 20% of the features that will meet 80% of the need. It’s about the least assistance principle. It’s about elegance and relevance.
From there, we finish by focusing on our providers:
Do you have a mobile solution?
Look, mobile is more than just a tactic, it’s a platform, and you need to recognize it as such. Frankly, if a vendor of an enterprise solution (except, perhaps, for computationally intensive work like 3D rendering and so on) doesn’t have a mobile solution, I reckon it’s a deal-breaker. This is where mobile is really the catalyst for change: it’s bringing a full suite of technology support whenever and wherever needed, so we need to start thinking about what a full suite of support is. What is a full performance ecosystem?
So there you have it, the gist of the presentation. If you master the concepts behind these phrases and employ them judiciously, I do believe you’ll have a better chance of making mlearning work.
Positive Payload Weapons Presentation Mindmap
The other evening I went off to hear an intriguing-sounding presentation on Positive Payload Weapons by Margarita Quihuis (who really just introduced the session) and Mark Nelson. As I sometimes do, I mind-mapped it.
I have to say it’s an intriguing framework, but it appeared that they’ve not yet really put it into practice. In short, as the diagram in the lower right suggests, weapons have evolved to do more damage at greater range (from knives one on one to atomic bombs across the world). What could we do to evolve doing more good at greater range? From personal kudos to, well, that’s the open question. They cited the Israel-Iran Love Bombs as an example, and the tactical response.
Oh, yeah, the drug part is the serotonin you get from doing positive things (or something like that).
Style-ish
I’m a real fan of styles, ala Microsoft Word. If you don’t get this concept, I wish you would. Let me explain.
The concept is fairly simple. Instead of hand-formatting a document by manually adding bold, font sizes, italics, indents, and extra paragraph returns, you define each paragraph as a ‘style’. That is, you say this paragraph is a heading 1, that paragraph is normal or body text, this other one is a figure, etc. Then you define what a heading 1 looks like: bold, 14 pt font, 6 pts of space before, 6 pts of space after, etc.
Why use styles? Several reasons. First, I can use the outline feature to organize my writing, and automatically have the right headings. Second, if I add content, I don’t have to hand-reformat documents where extra returns have been used to force page breaks (really). Third, and most importantly, if someone wants a different look and feel for the document, I just change the definition of the styles; I don’t have to manually reformat the document.
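The payoff can be sketched with a toy example (the style names, the document, and the little “renderer” are all invented for illustration): content carries only style names, the look lives in one table, and editing one definition restyles every paragraph that uses it.

```python
# Formatting lives in one table, keyed by style name.
styles = {
    "Heading 1": {"bold": True, "size": 14},
    "Body": {"bold": False, "size": 11},
}

# Content references styles by name; no formatting is stored here.
document = [
    ("Heading 1", "Why use styles?"),
    ("Body", "Because formatting is defined once, not per paragraph."),
]

def render(doc, styles):
    """Toy renderer: look up each paragraph's style definition by name."""
    out = []
    for style_name, text in doc:
        style = styles[style_name]
        marker = "**" if style["bold"] else ""
        out.append(f"{marker}{text}{marker} [{style['size']}pt]")
    return out

# Changing one definition restyles every paragraph using that style:
styles["Heading 1"]["size"] = 18
```

That one-line change at the end is the whole argument: no hunting through the document reformatting headings by hand.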
If you use styles correctly, the document automatically handles things like page breaks and formatting, so the document looks great no matter how you change and edit it. Which is why, when someone sends me a document to edit that is hand formatted, I’ll often redo the whole (darn) thing in styles, just to make my life easier. And grumble, with less than complimentary thoughts about the author.
Now, styles are not just in Microsoft Word; they’re in Pages, PowerPoint, Keynote, and other places where you end up with repeated formats. They may go by a different name, playing a role in templates, themes, masters, and so on, but the concept is about separating out what it says from how it looks, and having the description of how it looks separately editable from what it says. It’s the point behind CSS and XML, and it manifests increasingly in smart content.
And I admit, I’m really good with styles in Word, pretty good in Pages, and still wrestling with Keynote; I don’t know about other tools like Excel, but I reckon the concept is important enough that it should start showing up everywhere.
Please, please, use styles. At least in anything you send to me ;).