Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

25 January 2012

Will tablets diverge?

Clark @ 6:27 AM

After my post trying to characterize the differences between tablets and mobile, Amit Garg similarly posted that tablets are different. He concludes that “a conscious decision should be made when designing tablet learning (t-learning) solutions”, and goes further to suggest that converting elearning or mlearning directly may not make the most sense.  I agree.

As I’ve suggested, I think the tablet’s not the same as a mobile phone. It’s not always with you, and consequently it’s not ready at hand for any momentary use. A real mobile device is useful for quick information bursts, not sustained attention to the device. (I’ll suggest that listening to audio, whether canned or a conversation, isn’t quite the same; there the mobile device is a vehicle, not the main source of interaction.) Tablets, in general, are for more sustained interactions. While they can be used for quick hits, the screen size supports longer engagement.

So when do you use tablets? I believe they’re valuable for regular elearning, certainly, though you’d want to design for the touchscreen interface rather than mimic a mouse-driven interaction. Of course, I believe you also shouldn’t replicate the standard garbage elearning, but instead take the chance to rethink the learning experience. As Barbara Means suggested in the SRI report for the US Department of Education, finding that eLearning was now superior to F2F, it’s not because of the medium itself, but because of the chance to redesign the learning.

So I think that tablets like the iPad will be great elearning platforms. Unless the task is inherently desktop, the intimacy of the touchscreen experience is likely to be superior. (Though, regarding Apple’s new market move, the books can be stunning, but they’re not a full learning experience.) But that’s not all.

Desktops, and even laptops, don’t have the portability of a tablet. I, and others, find that tablets are taken more places than laptops. Consequently, they’re available for use as performance support in more contexts than laptops (though not as many as smart or app phones). I think there’ll be a continuum of performance support opportunities; constraints like the quantity of information (I’d rather look at a diagram on a tablet), constraints of time & space in the performance context, and preexisting preferences for pods (smartphone or PDA) versus tablets will determine the solution.

I do think there will be times when you can design performance support to run on both pads and pods, and times you can design elearning for both laptop and tablet (and tools will make that easier), but you’ll want to do a performance context analysis as well as your other analyses to determine what makes sense.


7 September 2016

China is mobile!

Clark @ 8:14 AM

I’ve had the fortune to be here in China speaking on mlearning.  And there are a couple of interesting revelations that I hadn’t really recognized when I did the same last year that I thought I’d share.

For one, while mobile is everywhere, as in many places, it’s even more so here. It seems many people carry more than one phone, for a variety of reasons (one fellow said he carried a second because the battery wouldn’t last all day!). But they’re all phones; I see few tablets. They vary in size from phones to phablets, but they’re everywhere.

Which leads to a second recognition. They are big into mlearning, and elearning. The culture respects scholarship (no anti-intellectualism here), so they’re quite keen to continue their education. Companies with mlearning courses do well, and the government is investing in educational technology in a big way. It’s not clear whether their pedagogy is advanced (I can’t read Chinese, I admit), but they do get ‘chunking’ into small bits. And, importantly, they recognize the value of the investment.

One other thing struck me as well: QR codes live! They’re everywhere here. They were used during my workshop to run a lottery, and to answer some polling questions on demographics of the audience. They’re in the restaurants as a start to the payment process. And they’re scattered around on most ads. They have an advantage: scanning is built into the ubiquitous social media app, WeChat, so recognition is systematic.
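For the technically curious, generating a QR code is trivial these days; the hard part, as the WeChat experience shows, is getting one scanner into everyone’s pocket. A minimal sketch in Python using the open-source qrcode package (the URL is just an example):

```python
# Minimal QR generation sketch (pip install qrcode[pil]).
import qrcode

img = qrcode.make("https://quinnovation.com")  # encode any string or URL
img.save("quinnovation-qr.png")                # scannable by WeChat or any reader
```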

Establishing the consistent use of a standard can help build a powerful, and valuable, ecosystem.  I can wish that the providers in the US would work and play together a little bit more!  There may be better alternatives, but getting consistently behind one standard makes the investment amortize effectively.

I’m pleased to see that mLearning is taking off, and had fun sharing some of the models that I think provide leverage to really take advantage. Here’s to getting going with mobile!

26 June 2015

Personal processing

Clark @ 7:48 AM

I was thinking about a talk on mobile I’m going to be giving, and realized that mobile is really about personal processing. Many of the things you can do at your desktop you can do with your mobile, even a wearable: answering calls, responding to texts.  Ok, so responding to email, looking up information, and more might require the phone for a keyboard (I confess to not being a big Siri user, mea culpa), but it’s still where/when/ever.

So the question then became “what doesn’t make sense on a mobile?” And my thought was that industrial-strength processing doesn’t make sense on a mobile. Processor-intensive work: video editing, 3D rendering, things that require either big screens or lots of CPU. So, for instance, while word processing isn’t really CPU intensive, for some reason mobile word processors don’t seamlessly integrate outlining. Yet I require outlining for large-scale writing, book chapters or whole books. I don’t do 3D or video processing, but those would count too.

One of the major appeals of mobile is having versatile digital capabilities, the rote/complex complement to our pattern-matching brains (I really wanted to call my mobile book ‘augmenting learning’), with us at all times. It makes us more effective. And for many things – all those things we do with mobile such as looking up info, navigating, remembering things, snapping pictures, calculating tips – that’s plenty of screen and processing grunt. It’s for personal use.

Sure, we’ll get more powerful capabilities (they’re touting multitasking on tablets now), and the boundaries will blur, but I still think there’ll be the things we do when we’re on the go, and the things we’ll stop and be reflective about.  We’ll continue to explore, but I think the things we do on the wrist or in the hand will naturally be different than those we do seated.   Our brains work in active and reflective modes, and our cognitive augment will similarly complement those needs.  We’ll have personal processing, and then we’ll have powerful processing. And that’s a good thing, I think. What think you?


21 January 2015

Wearables?

Clark @ 8:22 AM

In a discussion last week, I suggested that the things I was excited about included wearables. Sure enough, someone asked if I’d written anything about it, and I haven’t, much. So here are some initial thoughts.

I admit I was not a Google Glass ‘Explorer’ (and now the program has ended). While tempted to experiment, I tend not to spend money until I see how a device is really going to make me more productive. For instance, when the iPad was first announced, I didn’t want one. Between the time it was announced and the time it was available, however, I figured out how I’d use it to produce, not just consume. I got one the first day it came out. By the same rationale, I got a Palm Pilot pretty early on, and it made me much more effective. I haven’t gotten a wrist health band, on the other hand; I don’t think they’re bad ideas, just not what I need.

The point being that I want to see a clear value proposition before I spend my hard-earned money. So what am I thinking in regards to wearables? What wearables do I mean? I am talking wrist devices, specifically. (I may eventually warm up to glasses as well, when they can do more augmented reality than they do now.) Why wrist devices? That’s what I’m wrestling with: trying to turn what is so far an intuitive assessment into something conceptual.

Part of it, at least, is that it’s with me all the time, but in an unobtrusive way.  It supports a quick flick of the wrist instead of pulling out a whole phone. So it can do that ‘smallest info’ in an easy way. And, more importantly, I think it can bring things to my attention more subtly than can a phone.  I don’t need a loud ringing!

I admit that I’m keen on a more mixed-initiative relationship than I currently have with my technology. I use my smartphone to get things I need, and it can alert me to things that I’ve indicated I’m interested in, such as events that I want an audio alert for. And of course, incoming calls. But what about things that my systems come up with on their own? This is increasingly possible, and again desirable. Using context, and if a system had some understanding of my goals, it might be able to be proactive. So imagine you’re out and about, and your watch reminds you that while you’re here you wanted to pick up something nearby, providing the item and location. Or prompts you to prep for that upcoming meeting, providing some minimal but useful info. Note that this is largely not what’s currently on offer. We already have geofencing to do some of this, but right now you largely have to pull out your phone, or have it make a fairly intrusive noise to be heard from your pocket or purse.
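To make that concrete, here’s a minimal sketch of the geofencing logic involved, in plain Python. The reminder data, coordinates, and radius are all invented for illustration; a real implementation would use the platform’s region-monitoring APIs rather than computing distances itself.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters (haversine)."""
    r = 6371000  # Earth's mean radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A hypothetical goal the system knows about: an errand tied to a place.
reminder = {"text": "Pick up the adapter at the electronics shop",
            "lat": 37.7793, "lon": -122.4193, "radius_m": 200}

def check_geofence(lat, lon):
    """Return the reminder text when the user enters the region, else None."""
    if distance_m(lat, lon, reminder["lat"], reminder["lon"]) <= reminder["radius_m"]:
        return reminder["text"]  # hand off to the watch for a subtle tap
    return None
```

The interesting design question isn’t the math, of course; it’s the mixed-initiative part: the system knowing enough about my goals to decide this nudge is worth my attention.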

So two things about this: one, why the watch and not the phone, and the other, why not the glasses? The watch form factor is, to me, a more accessible interface to serve as an interactive companion. As I suggested, pulling the phone out of the pocket, turning it on, going through the security check (even just my fingerprint), adds more overhead than I necessarily want. If I can have something less intrusive, even as part of a system and not fully capable on its own, that’s OK. Why not glasses? I guess it’s just that they seem more unnatural. I am accustomed to having information on my wrist, and while I wear glasses, I want them to be invisible to me. I would love to have a heads-up display at times, but all the time would seem to get annoying. I’ll stretch and suggest that the empirical result that most folks have stopped wearing them most of the time bears out my story.

Why not a ring, or a pendant, or…? A ring seems to have too small an interface area. A pendant isn’t easily observable. My wrist is easy to glance at (hence, watches). Why not a whole forearm console? If I need that much interface, I can always pull out my phone. Or jump to my tablet. Maybe I will eventually want an iBracer, but I’m not yet convinced. A forearm holster for my iPhone? Hmmm…maybe too geeky.

So, reflecting on all this, it appears I’m thinking about tradeoffs of utility versus intrusion. A wrist device seems to fit a sweet spot in an ecosystem of tech: the quick glance, then the pocket access, and then various tradeoffs of size and weight for real productivity between tablets and laptops.

Of course, the real issue is whether there’s sufficient information available through the watch to make a value proposition. Is there enough that’s easy to get to that doesn’t require a phone? Check the temperature? Take a (voice) note? Get a reminder, take a call, check your location? My instinct is that there is. There are times I’d be happy not to have to take my phone (to the store, to a party) if I could take calls on my wrist, do minimal note taking and checking, and navigate. From the business perspective, you could also have performance support, whether push or pull. I don’t see it for courses, but for just-in-time… And contextual.

This is all just thinking aloud at this point.  I’m contemplating the iWatch but don’t have enough information as of yet.  And I may not feel the benefits outweigh the costs. We’ll see.

21 October 2014

Extending Mobile Models

Clark @ 8:19 AM

In preparation for a presentation, I was reviewing my mobile models. You may recall I started with my 4C’s model (Content, Compute, Communicate, & Capture), and have mapped that further onto Augmenting Formal, Performance Support, Social, & Contextual. I’ve refined it as well, separating out contextual and social as different ways of looking at formal and performance support. And, of course, I’ve elaborated it again, and wonder whether you think this more detailed conceptualization makes sense.

So, my starting point was realizing that it wasn’t just content. That is, there’s a difference between content and compute: interactivity was an important part of the 4C’s, and the characteristics in the content box weren’t discriminated enough. So the two new initial sections are mlearning content and mlearning compute, each by self or social. That is, we can be getting things as individuals, or it can be something that’s socially generated or socially enabled.

The point is that content is prepared media, whether text, audio, or video. It can be delivered or accessed as needed. Compute, interactive capability, is harder, but potentially more valuable. Here, an individual might actively practice, have mixed-initiative dialogs, or even work with others or with tools to develop an outcome or update some existing shared resources.

Things get more complex when we go beyond these elements. So I had capture as one thing, and I’m beginning to think it’s two: one is capturing the current context and keeping or sharing it for various purposes, and the other is the system using that context to do something unique.

To be clear here, capture is where you use text input, the microphone, or the camera to catch unique contextual data (or user input). It could also be other such data: a location, time, barometric pressure, temperature, or more. This data, then, is available to review, reflect on, or more. It can be combinations, of course, e.g. a picture at this time and this location.
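As a sketch of what one such capture record might hold (the field names are mine, purely illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class CaptureRecord:
    """One contextual capture: user input plus the sensor data around it."""
    timestamp: datetime = field(default_factory=datetime.now)
    lat: Optional[float] = None            # where it was captured
    lon: Optional[float] = None
    note: Optional[str] = None             # typed or dictated text
    photo_path: Optional[str] = None       # camera capture, if any
    temperature_c: Optional[float] = None  # ambient sensor data, if available
```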

Now, if the system uses this information to do something different than it would under other circumstances, we’re contextualizing what we do. Whether it’s because of when you are, providing time-specific information, or where you are, using location characteristics, this is likely to be the most valuable opportunity. Here I’m thinking alternate reality games or augmented reality (whether voiceover, visual overlays, what have you).
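A toy illustration of the difference: contextualizing is just branching on that captured data (reusing the CaptureRecord sketch above; the locations, times, and content are invented):

```python
def near_exhibit(lat, lon):
    """Hypothetical proximity check against one known exhibit location."""
    return abs(lat - 37.8011) < 0.001 and abs(lon - (-122.4014)) < 0.001

def contextual_content(record):
    """Serve different content depending on where and when the user is."""
    if record.lat is not None and near_exhibit(record.lat, record.lon):
        return "Overlay: the history of the exhibit in front of you"
    if 11 <= record.timestamp.hour <= 13:
        return "Tip: the demo station closes at 1pm"
    return "Default: today's orientation map"
```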

And I think this is device independent, e.g. it could apply to watches or glasses as well as phones and tablets. It means my 4C’s become: content, compute, capture, and contextualize. To ponder.

So, this is a more nuanced look at the mobile opportunities, and certainly more complex as well. Does the greater detail provide greater benefit?


14 April 2014

How do we mLearn?

Clark @ 6:56 AM

As preface, I used to teach interface design. My passion was still learning technology (and has been since I saw the connection as an undergraduate and designed my own major), but there’re strong links between the two fields in terms of designing for humans. My PhD advisor was a guru of interface design, and the thought was “any student of his should be able to teach interface design”. And so it turned out. So interface design continues to be an interest of mine, and I recognize its importance. More so on mobile, where limitations on interface real estate mean more cleverness may be required.

Steven Hoober, who I had the pleasure of sharing a stage with at an eLearning Guild conference, is a notable UI design expert with a specialty in mobile. He had previously conducted a research project examining how people actually hold their phones, as opposed to relying on anecdote. The Guild’s Research Director, Patti Shank, obviously thought this interesting enough to extend, because they’ve jointly published the results of the initial report and subsequent research into tablets as well. And the results are important.

The biggest result, for me, is that people tend to use phones while standing and walking, and tablets while sitting.  While you can hold a tablet with two hands and type, it’s hard.  The point is to design for supported use with a tablet,  but for handheld use with a phone. Which actually does imply different design principles.

I note that I still believe tablets to be mobile, as they can be used naturally while standing and walking, as opposed to laptops. Though you can support them, you don’t have to.  (I’m not going to let the fact that there are special harnesses you can buy to hold tablets while you stand, for applications like medical facilities dissuade me, my mind’s made up so don’t confuse me :)

The report goes into more detail about just how people hold devices in their hands (one-handed with thumb, one hand holding, one hand touching, two hands with two thumbs, etc.), and the proportion of each. This has an impact on where on the screen you put information and interaction elements.

Another point is the importance of the center for information and the periphery for interaction. Yet users are more accurate at the center, so you need to make your periphery targets larger and easier to hit. Seemingly obvious, but somehow obviousness doesn’t seem to hold in too much of design!
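If you wanted to encode that rule of thumb, it might look something like the sketch below. To be clear, the millimeter values are placeholders of mine, not numbers from the report:

```python
import math

def min_target_mm(x_frac: float, y_frac: float) -> float:
    """Minimum touch-target size given a position on screen (0..1 per axis).

    Users are most accurate near the center, so peripheral targets get a
    larger minimum. The sizes here are illustrative placeholders only.
    """
    # Normalized distance from screen center: 0 at center, 1 at a corner.
    d = math.hypot(x_frac - 0.5, y_frac - 0.5) / (0.5 * math.sqrt(2))
    return 8.0 + 4.0 * d  # e.g. 8 mm at center, up to 12 mm at the corners
```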

There is a wealth of other recommendations scattered throughout the report, with specifics for phones, small and large tablets, etc., as well as major takeaways. For example, the fact that tablets are often supported implies that more consideration of font size is needed than you’d expect!

The report is freely available on the Guild site in the Research Library (under the Content>Research menu).  Just in time for mLearnCon!  

10 April 2014

Can we jumpstart new tech usage?

Clark @ 7:20 AM

It’s a well-known phenomenon that new technologies get used in the same ways as old technologies until their new capabilities emerge. And this is understandable, if a little disappointing. The question is, can we do better? I’d certainly like to believe so! And a conversation on Twitter led me to try to make the case.

So, to start with, you have to understand the concept of affordances, at least at a simple level. The notion is that objects in the world support certain actions owing to the innate characteristics of the object (flat horizontal surfaces support placing things on them, levers afford pushing and pulling, etc.). Similarly, interface objects can imply their capabilities (buttons for clicking, sliders for sliding). These can be conveyed by visual similarity to familiar real-world objects, or be completely new (e.g. a cursor).

One of the important concepts is whether the affordance is ‘hidden’ or not. So, for instance, on iOS you can have meaningful differences between one-, two-, three-, and even four-fingered swipes. Unless someone tells you about them, however, or you discover them randomly (unlikely), you’re not likely to know they exist. And there’re now so many that they’re hard to remember. There are many deep arguments about affordances, and they’re likely important, but they can seem like ‘angels dancing on the head of a pin’ arguments, so I’ll leave it at this.

The point here being that technologies have affordances. So, for example, email allows you to transmit text communications asynchronously to a set group of recipients. And the question is, can we anticipate and leverage those properties and skip (or minimize) the stumbling beginnings?

Let me use an example. Remember the Virtual Worlds bubble?  Around 2003, immersive learning environments were emerging (one of my former bosses went to work for a company). And around 2006-2009 they were quite the coming thing, and there was a lot of excitement that they were going to be the solution.  Everyone would be using them to conduct business, and folks would work from desktops connecting to everyone else.  Let me ask: where are they now?

The Gartner Hype Cycle talks about the ‘Peak of Inflated Expectations’ and then the ‘Trough of Disillusionment’, followed by the ‘Slope of Enlightenment’ until you reach the ‘Plateau of Productivity’ (such vibrant language!).  And what I want to suggest is that the slope up is where we realize the real meaningful affordances that the technology provides.

So I tried to document the affordances and figure out what the core capabilities were. It seemed that Virtual Worlds really supported two main points: being inherently 3D and being social. Which are important components, no argument. On the other hand, they had two types of overhead: the cognitive load of learning them, and the technological load of supporting them. Which means that their natural niche would be where 3D is inherently valuable (e.g. spatial models or settings, such as refineries where you want to track flows), and where social is also critical (e.g. mentoring). Otherwise there were lower-cost ways to do either one alone.

Thus, my prediction would be that those would be the types of applications that’d be seen after the bubble burst and we’d traversed the trough.  And, as far as I know, I got it right.  Similarly, with mobile, I tried to find the core opportunities.  And this led to the models in the Designing mLearning book.

Of course, there’s a catch. I note that my understanding of the capabilities of tablets has evolved, for instance. Heck, if I could accurately predict all the capabilities and uses of a technology, I’d be running a venture capital fund. That said, I think that I can, and more importantly, we can, make a good initial stab. Sure, we’ll miss some things (I’m not sure I could’ve predicted the boon that Twitter has become), but I think we can do better than we have. That’s my claim, and I’m sticking to it (until proved wrong, at least ;).

8 April 2014

The death of the PDA greatly exaggerated?

Clark @ 8:29 AM

A colleague wondered if the image on the cover of the new book was a PDA, and my initial response was that the convergence of capabilities suggested the demise of the PDA. But then I had a rethink…

For what is a PDA?  It’s a digital platform sans the capability of a cellular voice channel.  My daughter got an iPod touch, but within a year we needed to get her a new phone, and it’s an iPhone.   Which suggests that a device without phone capability is increasingly less feasible.

But wait a minute, there are plenty of digital devices sans voice. In fact, I have one.  It’s a tablet!  It may have cellular data, but it certainly doesn’t have voice.  And while people are suggesting that the tablet is done, I’m not interested in a phablet, as I already have a problem with a phone in my pocket (putting me in the fashion faux pas category of liking a holster), and I think others want something smaller that they can have all the time.

So, I’ve argued elsewhere that mobile devices have to be handheld, and that tablets have usage patterns different than pocketables.  But I think in many instances tablets do function as personal digital assistants, when you’re not constrained by space.  There are advantages to the larger screen. So, while I think the pocketable version of the PDA is gone (since having a phone and a PDA seems redundant), the non-phone digital assistant is going to persist for the larger form factor.  What am I missing?

8 January 2014

Making Mobile Mayhem

Clark @ 6:10 AM

As I suggested in my post on directions for the year, I intend to be stirring up a bit of trouble here and there.  On a less formal basis, I want to suggest that another area where we need a little more light and a little less heat (and smoke) is mobile.  There is huge opportunity here, and I am afraid we are squandering it.

We’re doing a lot wrong when it comes to mobile. As Jason Haag has aptly put it, elearning courses on a phone (or tablet) are mobile elearning, not mobile learning (aka mlearning). And while there’s an argument for mobile elearning (at least on tablets), and a strong case for augmenting formal learning with mobile (regardless of device), mobile elearning is not mlearning’s natural niche.

mLearning’s natural niche is performance support, whether through content (interactive or not) or social. Think about how you use your phone. When I ask this of attendees, they’re using their phones to get information in the moment, or find their way, or capture information. They’re not using them to take courses!

So we need to be thinking outside the course.  To help, we need case studies, across business sectors, and across the areas.  Which means we need people to be getting their hands on development tools.

Which is a second problem: the tools that are easiest to use are being used to create courses. The elearning tools we use increasingly have mobile output, but it’s too easy to then just output courses. It turns out one of the phenomena that characterize our brains is ‘functional fixedness’: we use a tool in the way we’ve used it before. Yet we can use these tools to do other things. And there are tools more oriented towards performance support. Anything that creates content or interactivity can be used to build performance support, but we have to be doing it!

There’s more that we need to be doing in the background – content, governance, strategy – but we need to get our minds around mobile solutions to contextual needs, and start delivering the resources people need.  Mobile is big; the devices are out there, and they’re a platform for so much; we need to capitalize.

The place where you’re going to be able to see the case studies and explore the tools and start getting your mind around mobile will be this summer’s mLearnCon (in San Diego in June!). And you really should be going. Also, if you are doing mobile, you really should be submitting to present.  We need more examples, more ideas, more experience!  (If you need help writing a proposal, I’ve already written a guide.)

Really, presenting is a great contribution to yourself and the industry, and we really could use it.  Help us make mobile mayhem by showing the way.  Or, of course, join us at the conference to get ready to mix it up.  Hope to see you there.

12 November 2013

Thinking Context in the Design Process

Clark @ 6:28 AM

I was talking to the ADL Mobile folks about mobile design processes, and as usual I was going on about how mobile is not the sweet spot for courses (augmenting yes, full delivery no). When tablets are acting more like a laptop, sure, but otherwise no. I had suggested that the real mobile opportunities are using sensors to do contextual things, and I also opined that we really don’t have an instructional design model that adequately takes context into account. I started riffing on what that might involve, and then continued it on a recent trip to speak in Minneapolis.

Naturally, I started diagramming. I am thinking specifically of augmenting formal learning here, not performance support. In this diagram, when it’s not a course you head off to consider contextual performance support, if indeed the context of performance is away from a computer.

When, however, it is a course, and you start embedding the key decisions into a setting, the first thing you might want to do is use their existing context (or contexts, it occurs to me now).  Then we can wrap learning around where (or when) they are, turning that life event into a learning experience.  Assuming, of course, we can detect and deliver things based upon context, but that’s increasingly doable.

Now, if you can’t use their context, because it’s arguably not something that is located in their existing lives, we want to create a context (this is, really, the essence of serious game design). It might be fantastic (some conspiracy theory, say) or very real (e.g. the Red 7 sales demo, warning: large PDF), but it’s a setting in which the decisions are meaningfully embedded (that is, real application of the model, and of interest to the learner). It might be desktop, but if possible could we distribute the experience into the learners’ world, e.g. transmedia? Here we’re beginning to talk Alternate Reality Game. (And we use exaggeration to ramp up the motivation.)
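Rendering the diagram’s branching as rough Python, since a sketch may be clearer than prose (the flags are stand-ins for design judgments, not real functions):

```python
def design_approach(is_course: bool,
                    performing_away_from_computer: bool,
                    usable_existing_context: bool,
                    can_distribute_experience: bool) -> str:
    """Sketch of the contextual design decision flow described above."""
    if not is_course:
        if performing_away_from_computer:
            return "contextual performance support"
        return "desktop performance support"
    if usable_existing_context:
        return "wrap learning around the learner's real context"
    if can_distribute_experience:
        return "created context distributed into the world (ARG/transmedia)"
    return "created context on the desktop (serious game)"
```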

As an aside, I wondered when/how collaboration would fit in here, and I don’t yet have an answer: before setting, after, or in parallel?  Regardless, that’s definitely another consideration, which may be driven by a variety of factors such as whether there are benefits to role-play or collaboration in this particular instance.

This is still very preliminary (thinking and learning out loud), but it has some initial resonance to me.  For you too, or where am I going off track?

