Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

26 June 2015

Personal processing

Clark @ 7:48 am

I was thinking about a talk on mobile I’m going to be giving, and realized that mobile is really about personal processing. Many of the things you can do at your desktop you can do with your mobile, even a wearable: answering calls, responding to texts.  Ok, so responding to email, looking up information, and more might require the phone for a keyboard (I confess to not being a big Siri user, mea culpa), but it’s still where/when/ever.

So the question then became “what doesn’t make sense on a mobile?” And my thought was that industrial-strength processing doesn’t make sense on a mobile.  Processor-intensive work: video editing, 3D rendering, things that require either big screens or lots of CPU.  So, for instance, while word processing isn’t really CPU intensive, for some reason mobile word processors don’t seamlessly integrate outlining.  Yet I require outlining for large-scale writing, book chapters or whole books. I don’t do 3D or video processing, but that would count too.

One of the major appeals of mobile is having versatile digital capabilities, the rote/complex complement to our pattern-matching brains, with us at all times (I really wanted to call my mobile book ‘Augmenting Learning’).  It makes us more effective.  And for many things – all those things we do with mobile such as looking up info, navigating, remembering things, snapping pictures, calculating tips – that’s plenty of screen and processing grunt.  It’s for personal use.

Sure, we’ll get more powerful capabilities (they’re touting multitasking on tablets now), and the boundaries will blur, but I still think there’ll be the things we do when we’re on the go, and the things we’ll stop and be reflective about.  We’ll continue to explore, but I think the things we do on the wrist or in the hand will naturally be different than those we do seated.   Our brains work in active and reflective modes, and our cognitive augment will similarly complement those needs.  We’ll have personal processing, and then we’ll have powerful processing. And that’s a good thing, I think. What think you?

 

23 April 2015

Personal Mobile Mastery

Clark @ 8:29 am

A conversation with a colleague prompted a reflection.  The topic was personal learning, and in looking for my intersections (beyond my love of meta-learning), I looked at my books. The Revolution isn’t an obvious match, nor are games (though trust me, I could make them work ;), but a more obvious match was mlearning. So the question is, how do we do personal knowledge mastery with mobile?

Let’s get the obvious out of the way. Most of what you do on the desktop, particularly social networking, is doable on a mobile device.  And you can use search engines and reference tools just the same. You can find how-to videos as well. Is there more?

First, of course, are all the things to make yourself more ‘effective’.  Take the four key original apps on the Palm Pilot, for instance: your calendar to remind you of events or to check availability, ToDo checklists to remember commitments, memos to take notes for reference, and your contact list to reach people.  That isn’t really learning, but it’s valuable to learn to be good at these.

Then there are the things you do because of where you are.  Navigation to somewhere, or finding what’s around you, are the obvious choices. Those are things you won’t necessarily learn from, but they make you more effective.  But they can also help educate you. You can look at where you are on a map and see what’s around you, or identify the thing on the map that’s in a particular direction (“oh, that’s the Quinnsitute” or “There’s Mount Clark” or whatever), and have a chance of identifying a seen prominence.

And you can use those social media tools as before, but you can also use them because of where or when you are. You can snap a picture of something, send it around, and ask how it could help you. Of course, you can snap pictures or video for later recollection and reflection, and contribute them to a blog post for reflection.  And take notes by text or audio, or even by sketching or diagramming. The notes people take for themselves at conferences, for instance, get shared and are valuable not just for the sharer, but for all attendees.

Certainly searching on things you don’t understand, or getting a translation when you hit an unknown language, are also options.  You can learn what something means, and avoid making mistakes.

When you are (e.g. based upon what you’re doing) is a little less developed.  You’d have to have rich tagging around your calendar to signal what it is you’re doing for a system to be able to leverage that information, but I reckon we can get there if and when we want.
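To make that concrete, here’s a minimal sketch (Python, with invented calendar entries, tags, and resource mappings; nothing here comes from a real calendar API) of how tags on calendar entries could let a system surface support for whatever you’re in the middle of:

from datetime import datetime

# Hypothetical tagged calendar: each entry signals what kind of activity is underway.
calendar = [
    {"start": datetime(2015, 4, 23, 9),  "end": datetime(2015, 4, 23, 10),
     "title": "Design review", "tags": ["meeting", "design"]},
    {"start": datetime(2015, 4, 23, 13), "end": datetime(2015, 4, 23, 15),
     "title": "Client site visit", "tags": ["travel", "client"]},
]

# Hypothetical mapping from activity tags to support resources.
resources = {
    "meeting": ["agenda template", "note-taking checklist"],
    "design":  ["design heuristics job aid"],
    "travel":  ["expense capture app", "site safety briefing"],
}

def support_for(now):
    """Return the current activity and the resources relevant to it, per the calendar."""
    for entry in calendar:
        if entry["start"] <= now < entry["end"]:
            suggestions = [r for tag in entry["tags"] for r in resources.get(tag, [])]
            return entry["title"], suggestions
    return None, []

print(support_for(datetime(2015, 4, 23, 9, 30)))
# -> ('Design review', ['agenda template', 'note-taking checklist', 'design heuristics job aid'])

The lookup is trivial; the hard part, as noted above, is getting calendars tagged richly enough to be worth leveraging.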

I’m not a big fan of ‘learning’ on a mobile device, maybe a tablet in transit or something, but not courses on a phone.  On the other hand, I am a big fan of self-learning on a phone, using your phone to make you smarter. These are embryonic thoughts, so I welcome feedback.   Being more contextually aware, both in the moment and over time, is a worthwhile opportunity, one we can and should look to advance.  I think there’s not much yet, though tools like ARIS are going to help change that. And that’ll be good.

 

15 April 2015

Cyborg Thinking: Cognition, Context, and Complementation

Clark @ 8:25 am

I’m writing a chapter about mobile trends, and one of the things I’m concluding with is the different ways we need to think to take advantage of mobile. The first one emerged as I wrote and kind of surprised me, but I think there’s merit to it.

The notion is one I’ve talked about before, about how what our brains do well, and what mobile devices do well, are complementary. That is, our brains are powerful pattern matchers, but have a hard time remembering rote information, particularly arbitrary or complicated details.  Digital technology is the exact opposite. So, that complementation whenever or wherever we are is quite valuable.

Consider chess.  When computers first played against humans, they didn’t do well.  As computers became more powerful, however, they finally beat the world champion. But they didn’t do it the way humans do; they did it by very different means: they couldn’t evaluate positions as well, but they could calculate many more moves ahead and use simple heuristics to determine whether those were good plays.  The sheer computational ability eventually trumped the familiar pattern approach.  Now, however, there’s a new type of competition, where a person and a computer team up and play against another similar team. The interesting result is that the winner is not the best chess player, nor the best computer program, but the player who knows best how to leverage a chess companion.

Now map this to mobile: we want to design the best complement for our cognition. We want to end up having the best cyborg synergy, where our solution does the best job of leaving to the system what it does well, and leaving to the person the things we do well. It’s maybe only a slight shift in perspective, but it is a different view than designing to be, say, easy to use. The point is to have the best partnership available.

This isn’t just true for mobile, of course, it should be the goal of all digital design.  The specific capability of mobile, using sensors to do things because of when and where we are, though, adds unique opportunities, and that has to figure into thinking as well.  As does, of course, a focus on minimalism, and thinking about content in a new way: not as a medium for presentation, but as a medium for augmentation: to complement the world, not subsume it.

It’s my thinking that this focus on augmenting our cognition and our context with content that’s complementary is the way to optimize the uses of mobile. What’s your thinking?

14 April 2015

Defining Microlearning?

Clark @ 8:32 am

Last week on the #chat2lrn twitter chat, the topic was microlearning. It was apparently prompted by this post by Tom Spiglanin which does a pretty good job of defining it, but some conceptual confusion showed up in the chat that makes it clear there’s some work to be done.  I reckon there may be a role for the label and even the concept, but I wanted to take a stab at what it is and isn’t, at least in principle.

So the big point to me is the word ‘learning’.  A number of people opined about accessing a how-to video, and let’s be clear: learning doesn’t have to come from that.   You could follow the steps and get the job done, and yet have to access it again the next time you need it. Just like I can look up the specs on the resolution of my computer screen, use that information, but have to look it up again next time.  So it could be just performance support, and that’s a good thing, but it’s not learning.  It suits the notion of micro content, but again, it’s about getting the job done, not developing new skills.

Another interpretation was little bits of components of learning (examples, practice) delivered over time. That is learning, but it’s not microlearning. It’s distributed learning, but the overall learning experience is macro (and much more effective than the massed, event-based model).  Again, a good thing, but not (to me) microlearning.  This is what Will Thalheimer calls subscription learning.

So, then, if these aren’t microlearning, what is?  To me, microlearning has to be a small but complete learning experience, and this is non-trivial.  To be a full learning experience, this requires a model, examples, and practice.  This could work with very small learnings (I use an example of media roles in my mobile design workshops).  I think there’s a better model, however.

To explain, let me digress. When we create formal learning, we typically take learners away from their workplace (physically or virtually), and then create contextualized practice. That is, we may present concepts and examples (beforehand via a blended approach, ideally, or less effectively within the learning event), and then we create practice scenarios. This is hard work. Another alternative is more efficient.

Here, we layer the learning on top of the work learners are already doing.  Now, why isn’t this performance support? Because we’re not just helping them get the job done, we’re explicitly turning this into a learning event by not only scaffolding the performance, but layering on a minimal amount of conceptual material that links what they’re doing to a model. We (should) do this in examples and feedback on practice, now we can do it around real work. We can because (via mobile or instrumented systems) we know where they are and what they’re doing, and we can build content to do this.  It’s always been a promise of performance support systems that they could do learning on top of helping the outcome, but it’s as yet seldom seen.
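Purely as an illustration (the task, steps, and model statements below are all invented, not from any real system), here’s a small sketch of the difference between plain performance support and support with a conceptual layer linking each step to a model:

# Hypothetical steps for one task; each carries a one-line link back to a model.
# The 'do' line alone is performance support; adding the 'why' layers learning on top.
task_steps = {
    "diagnose_fault": [
        {"do": "Check the error log before swapping parts.",
         "why": "Model: symptoms first, causes second; the log narrows the hypothesis space."},
        {"do": "Test the cheapest likely cause first.",
         "why": "Model: weigh likelihood of a cause against the cost of testing it."},
    ],
}

def deliver(task, step_index, with_learning=True):
    """Return support for a step, optionally with the conceptual layer on top."""
    step = task_steps[task][step_index]
    return step["do"] + ("\n" + step["why"] if with_learning else "")

# In practice, context (location, the screen the worker has open, instrumented systems)
# would select the task and step; here we just ask for the first step directly.
print(deliver("diagnose_fault", 0))

The ‘why’ line is the minimal conceptual material; everything else is an ordinary job aid.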

And the focus on minimalism is good, too.  We overwrite and overproduce, adding in lots that’s not essential.  Cf. Carroll’s Nurnberg Funnel or Moore’s Action Mapping.  And even for non-mobile, minimalism makes sense (as I tout under the banner of the Least Assistance Principle).  That is, it’s really not rude to ask people (or yourself as a designer) “what’s the least I can do for you?”  Because that’s what people generally really prefer: give me the answer and let me get back to work!

Microlearning as a phrase has probably become current (he says, cynically) because elearning providers are touting it to sell the ability of their tools to now deliver to mobile.   But it can also be a watchword to emphasize thinking about performance support, learning ‘in context’, and minimalism.  So I think we may want to continue to use it, but I suggest it’s worthwhile to be very clear what we mean by it. It’s not courses on a phone (mobile elearning), and it’s not learning spaced out over time; it’s small but useful, complete learning experiences that fit, by size of objective or by context, ‘in the moment’.  At least, that’s my take; what’s yours?

17 March 2015

Making Sense of Research

Clark @ 7:37 am

A couple of weeks ago, I was riffing on sensors: how mobile devices are getting equipped with all sorts of new sensors, the potential for more, and what they might bring.  Part of that discussion was a brief mention of sensor nets, and how aggregating all this data could be of interest too. And lo and behold, a massive example was revealed last week.

The context was the ‘spring forward’ event Apple held where they announced their new products.  The most anticipated one was the Apple Watch (which was part of what drove my post on wearables), the new iConnected device for your wrist.  The second major announcement was their new MacBook, a phenomenally thin new laptop with some amazing specs on weight and screen display, as well as some challenging tradeoffs.

One announcement that was less noticed was the announcement of a new research endeavor, but I wonder if it isn’t the most game-changing element of them all.  The announcement was ResearchKit, and it’s about sensor nets.

So, smartphones have lots of sensors.  And the watch will have more.  They can already track a number of parameters about you automatically, such as your walking.  There can be more, with apps that can ask about your eating, weight, or other health measurements.  As I pointed out, aggregating data from sensors could do things like identify traffic jams (Google Maps already does this), or collect data like restaurant ratings.

What Apple has done is to focus specifically on health data from their HealthKit, and partner with research hospitals. What they’re saying to scientists is “we’ll give you anonymized health data, you put it to good use”. A number of research centers are on board, and already collecting data about asthma and more.  The possibility is to use analytics that combine the power of large numbers with a bunch of other descriptive data to be able to investigate things at scale.  In general, research like this is hard since it’s hard to get large numbers of subjects, but large numbers of subjects is a much better basis for study (for example, the China-Cornell-Oxford Project that was able to look at a vast breadth of diet to make innovative insights into nutrition and health).

And this could be just the beginning: collecting data en masse (while successfully addressing privacy concerns) can be a source of great insight if it’s done right.  Having devices that are with you and capable of capturing a variety of information gives the opportunity to mine that data for expected, and unexpected, outcomes.
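As a toy sketch only (the records are invented, and real de-identification is considerably harder than hashing an identifier), this is the general shape of the pipeline: strip identities on the device, then report group-level statistics rather than individuals:

import hashlib
from statistics import mean

# Invented raw records from many phones: (identity, condition of interest, daily step count).
raw = [
    ("alice@example.com", "asthma", 4200),
    ("bob@example.com",   "none",   9100),
    ("carol@example.com", "asthma", 6300),
]

def pseudonymize(identity):
    """Replace an identity with a one-way hash before the record leaves the device."""
    return hashlib.sha256(identity.encode()).hexdigest()[:12]

# Strip identities, then aggregate: researchers see group statistics, not people.
records = [(pseudonymize(who), condition, steps) for who, condition, steps in raw]
by_condition = {}
for _, condition, steps in records:
    by_condition.setdefault(condition, []).append(steps)

for condition, values in by_condition.items():
    print(condition, "n =", len(values), "mean daily steps =", round(mean(values)))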

A new iDevice is always cool, and while it’s not the first smart watch (nor was the iPhone the first smartphone, the iPad the first tablet, nor the iPod the first music player), Apple has a way of making the experience compelling.  Like with the iPad, I haven’t yet seen the personal value proposition, so I’m on the fence.  But the ability to collect data in a massive way that could support ground-breaking insights and innovations in medicine? That has the potential for affecting millions of people around the world.  Now that is impact.

3 March 2015

On the road again

Clark @ 7:42 am

Well, some more travels are imminent, so I thought I’d update you on where the Quinnovation road show would be on tour this spring:

  • March 9-10 I’ll be collaborating with Sarah Gilbert and Nick Floro to deliver ATD’s mLearnNow event in Miami on mobile
  • On the 11th I’ll be at a private event talking the Revolution to a select group outside Denver
  • Come the 18th I’ll be inciting the revolution at the ATD Golden Gate chapter meeting here in the Bay Area
  • On the 25th-27th, I’ll be in Orlando again instigating at the eLearning Guild’s Learning Solutions conference
  • May 7-8 I’ll be kicking up my heels about the revolution for the eLearning Symposium in Austin
  • I’ll be stumping the revolution at another vendor event in Las Vegas 12-13
  • And June 2-3 I’ll be myth-smashing for ATD Atlanta, and then workshopping game design

So, if you’re at one of these, do come up and introduce yourself and say hello!

 

 

25 February 2015

mLearning more than mobile elearning?

Clark @ 6:17 am

Someone tweeted about their mobile learning credo, and mentioned the typical ‘mlearning is elearning, extended’ view. Which I rejected, as I believe mlearning is much more (and so should elearning be).  And then I thought about it some more.  So I’ll lay out my thinking, and see what you think.

I have been touting that mLearning could and should be focused, as should P&D, on anything that helps us achieve our goals better. Mobile, paper, computers, voodoo, whatever technology works.  Certainly in organizations.  And this yields some interesting implications.

So, for instance, this would include performance support and social networks.  Anything that requires understanding how people work and learn would be fair game. I was worried about whether that fit some operational aspects like IT and manufacturing processes, but I think I’ve got that sorted.  UI folks would work on external products, and any internal software development, but around that, helping folks use tools and processes belongs to those of us who facilitate organizational performance and development.  So we, and mlearning, are about any of those uses.

But the person, despite seeming to come from a vendor to orgs, not schools, could be talking about schools instead, and I wondered whether mLearning for schools, definitionally, really is only about supporting learning.  And I can see the case for that: that mlearning in education is about using mobile to help people learn, not perform.  It’s about collaboration, for sure, and tools to assist.

Note I’m not making the case for schools as they are; a curriculum rethink definitely needs to accompany using technology in schools in many ways.  Koreen Pagano wrote this nice post separating Common Core teaching versus assessment, which goes along with my beliefs about the value of problem solving.  And I also laud Roger Schank’s views, such as the value (or not) of the binomial theorem as a classic example.

But then, mobile should be a tool in learning, so it can work as a channel for content, but also for communication, capture, and compute (e.g. the 4C’s of mlearning).  And there’s the emergent capability of contextual support (the 5th C, e.g. combinations of the first four).  So this view would argue that mlearning can be used for performance support in accomplishing a meaningful task that’s part of a learning experience.

That would take me back to mlearning being more than just mobile elearning, as Jason Haag has aptly separated.  Sure, mobile elearning can be a subset of mlearning, but not the whole picture. Does this make sense to you?

24 February 2015

Making ‘sense’

Clark @ 8:19 am

I recently wrote about wearables, where I focused on form factor and information channels.  An article I recently read talked about a guy who builds spy gear, and near the end he talked about some things that started me thinking about an extension of that for all mobile, not just wearables.  The topic is  sensors.

In the article, he talks about how, in the future, glasses could detect whether you’ve been around bomb-making materials:

“You can literally see residue on someone if your glasses emit a dozen different wavelengths of microlasers that illuminate clothing in real time and give off a signature of what was absorbed or reflected.”

That’s pretty amazing, chemical spectrometry on the fly.  He goes on to talk about distance vision:

“Imagine you have a pair of glasses, and you can just look at a building 50 feet away, 100 feet away, and look right through the building and see someone moving around.”

Now, you might or might not like what he’s doing with that, but imagine applying it elsewhere: identifying where people are for rescue, or identifying materials for quality control.

Heck, I’d find it interesting just to augment the camera with infrared and ultraviolet: imagine being able to use the camera on your phone or glasses to see what’s happening at night, e.g. wildlife (tracking coyotes or raccoons, and managing to avoid skunks!).  Night vision, and seeing things that fluoresce under UV would both be really cool additions.

I’d also be interested in having them enlarge things, bringing small things to light like a magnifying glass or microscope.

It made me think about all the senses we could augment. I was thinking about walking our dogs, and how their olfactory life is much richer than ours.  They are clearly sensing things beyond our olfactory capabilities, and it would be interesting to have some micro scent detectors that could pick up faint traces to track animals (or know which owner is not adequately controlling a dog, ahem).  They could potentially serve as smoke or carbon monoxide detectors also.

Similarly, auditory enhancement: could we hear things fainter than our ears detect, or have them serve as a stethoscope?  Could we detect far off cries for help that our ears can’t? Of course, that could be misused, too, to eavesdrop on conversations.  Interesting ethical issues come in.

And we’ve already heard about the potential to measure one’s movement, blood pressure, pulse, temperature, and maybe even blood sugar, to track one’s health.  The fitness bands are getting smarter and more capable.

There is the possibility for other things we personally can’t directly track: measuring ambient temperature and air pressure quantitatively are both already possible, and built into some devices.  The thermometer could be a health and weather guide, and a barometer/altimeter would be valuable for hiking in addition to weather.

The combination of reporting these could be valuable too.  Sensor nets, where the data from many micro sensors are aggregated, have interesting possibilities. Either with known combinations, such as aggregating temperature and air pressure to help with weather, or with machine learning, where for example we include sensitive motion detectors and might learn to predict earthquakes the way animals supposedly can.  Sound, too, could be used to triangulate cries for help, and material detectors could help locate sources of pollution.
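For the known-combination case, here’s a minimal sketch (invented readings, deliberately coarse grid) of aggregating temperature and pressure reports from many devices into per-area averages of the sort a weather model could use:

from collections import defaultdict
from statistics import mean

# Invented readings from many devices: (latitude, longitude, temperature C, pressure hPa).
readings = [
    (37.77, -122.42, 18.2, 1015.3),
    (37.78, -122.41, 18.6, 1015.1),
    (37.33, -121.89, 21.4, 1012.8),
]

def grid_cell(lat, lon, size=0.1):
    """Bucket a location into a coarse cell, so no single device is identifiable."""
    return (round(lat / size) * size, round(lon / size) * size)

cells = defaultdict(list)
for lat, lon, temp, pressure in readings:
    cells[grid_cell(lat, lon)].append((temp, pressure))

# Each cell's averages are the aggregate signal; individual reports stay on the devices.
for cell, values in cells.items():
    temps = [t for t, _ in values]
    pressures = [p for _, p in values]
    print(cell, "avg temp:", round(mean(temps), 1), "avg pressure:", round(mean(pressures), 1))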

We’ve done amazing things with technology, and sensors are both shrinking and getting more powerful. Imagine having sensors scattered about your body in various wearables, integrating that data in known ways, and agreeing to anonymous aggregation for data mining.  Yes, there are concerns, but benefits too.

We can put these together in interesting ways, notifications of things we should pay attention to, or just curiosity to observe things our natural senses can’t detect.  We can open up the world in powerful ways to support being more informed and more productive.  It’s up to us to harness it in worthwhile ways.

21 January 2015

Wearables?

Clark @ 8:22 am

In a discussion last week, I suggested that the things I was excited about included wearables. Sure enough, someone asked if I’d written anything about it, and I haven’t, much. So here are some initial thoughts.

I admit I was not a Google Glass ‘Explorer’ (and now the program has ended).  While tempted to experiment, I tend not to spend money until I see how the device is really going to make me more productive.  For instance, when the iPad was first announced, I didn’t want one. Between the time it was announced and the time it was available, however, I figured out how I’d use it to produce, not just consume.   I got one the first day it came out.  By the same rationale, I got a Palm Pilot pretty early on, and it made me much more effective.   I haven’t gotten a wrist health band, on the other hand, though I don’t think they’re bad ideas, just not what I need.

The point being that I want to see a clear value proposition before I spend my hard-earned money.  So what am I thinking in regards to wearables? What wearables do I mean?  I am talking wrist devices, specifically.  (I may eventually warm up to glasses as well, when what they can do is more augmented reality than it is now.)  Why wrist devices?  That’s what I’m wrestling with, trying to make explicit what is so far a more intuitive assessment.

Part of it, at least, is that it’s with me all the time, but in an unobtrusive way.  It supports a quick flick of the wrist instead of pulling out a whole phone. So it can do that ‘smallest info’ in an easy way. And, more importantly, I think it can bring things to my attention more subtly than can a phone.  I don’t need a loud ringing!

I admit that I’m keen on a more mixed-initiative relationship than I currently have with my technology.  I use my smartphone to get things I need, and it can alert me to things that I’ve indicated I’m interested in, such as events that I want an audio alert for.  And of course, to incoming calls.  But what about things that my systems come up with on their own?  This is increasingly possible, and again desirable.  Using context, and if a system had some understanding of my goals, it might be able to be proactive. So imagine you’re out and about, and your watch reminds you that while you’re here you wanted to pick up something nearby, and provides the item and location.  Or prompts you to prep for that upcoming meeting and provides some minimal but useful info.   Note that this is largely not what’s currently on offer.  We already have geofencing to do some of this, but right now you mostly have to pull out your phone, or have it make an intrusive noise to be heard from your pocket or purse.
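Here’s a minimal sketch of that kind of geofencing check (the errands, coordinates, and radii are invented; a real watch would get its location and task list from the paired phone or its own sensors):

from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Invented errands tied to places; in practice these would come from your task list.
errands = [
    {"item": "pick up the prescription", "lat": 37.7794, "lon": -122.4184, "radius_m": 200},
    {"item": "drop off the dry cleaning", "lat": 37.7609, "lon": -122.4350, "radius_m": 150},
]

def nudges(here_lat, here_lon):
    """Return short, glanceable reminders for anything within its radius of here."""
    return [e["item"] for e in errands
            if distance_m(here_lat, here_lon, e["lat"], e["lon"]) <= e["radius_m"]]

print(nudges(37.7790, -122.4190))  # near the first errand -> ['pick up the prescription']

The point is the quiet delivery: a short nudge on the wrist instead of a ring from the pocket.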

So two things about this: one, why the watch and not the phone; the other, why not glasses? The watch form factor is, to me, a more accessible interface to serve as an interactive companion. As I suggested, pulling the phone out of the pocket, turning it on, going through the security check (even just my fingerprint), adds more overhead than I necessarily want.  If I can have something less intrusive, even as part of a system and not fully capable on its own, that’s OK.  Why not glasses? I guess it’s just that they seem more unnatural.  I am accustomed to having information on my wrist, and while I wear glasses, I want them to be invisible to me.  I would love to have a heads-up display at times, but all the time would seem to get annoying. I’ll stretch and suggest that the empirical result that most folks have stopped wearing them most of the time bears out my story.

Why not a ring, or a pendant, or something else?  A ring seems to have too small an interface area.  A pendant isn’t easily observable. My wrist is easy to glance at (hence, watches).  Why not a whole forearm console?  If I need that much interface, I can always pull out my phone.  Or jump to my tablet. Maybe I will eventually want an iBracer, but I’m not yet convinced. A forearm holster for my iPhone?  Hmmm… maybe too geeky.

So, reflecting on all this, it appears I’m thinking about tradeoffs of utility versus intrusion.  A wrist device seems to fit a sweet spot in an ecosystem of tech: the quick glance, then the pocket access, and then various tradeoffs of size and weight for real productivity between tablets and laptops.

Of course, the real issue is whether there’s sufficient information available through the watch to make a value proposition. Is there enough that’s easy to get to that doesn’t require a phone?  Check the temperature?  Take a (voice) note?  Get a reminder, take a call, check your location? My instinct is that there is.  There are times I’d be happy not to have to take my phone (to the store, to a party) if I could take calls on my wrist, do minimal note taking and checking, and navigate.  From the business perspective, there’s also performance support, whether push or pull.  I don’t see it for courses, but for just-in-time…  And contextual.

This is all just thinking aloud at this point.  I’m contemplating the iWatch but don’t have enough information as of yet.  And I may not feel the benefits outweigh the costs. We’ll see.

5 November 2014

#DevLearn 14 Reflections

Clark @ 9:57 am

This past week I was at the always great DevLearn conference, the biggest and arguably best yet.  There were some hiccups in my attendance, as several blocks of time were taken up with various commitments both work and personal, so for instance I didn’t really get a chance to peruse the expo at all.  Yet I attended keynotes and sessions, as well as presenting, and hobnobbed with folks both familiar and new.

The keynotes were arguably even better than before, and a high bar had already been set.

Neil deGrasse Tyson was eloquent and passionate about the need for science and the lack of match between school and life.    I had a quibble about his statement that doing math teaches problem-solving, as it takes the right type of problems (and Common Core is a step in the right direction) and it takes explicit scaffolding.  Still, his message was powerful and well-communicated. He also made an unexpected connection between Women’s Liberation and the decline of school quality that I hadn’t considered.

Beau Lotto also spoke, linking how our past experience alters our perception to necessary changes in learning.  While I was familiar with the starting point of perception (a fundamental part of cognitive science, my doctoral field), he took it in a very interesting and useful direction in an engaging and inspiring way.  His take-home message, teach not how to see but how to look, was succinct and apt.

Finally, Belinda Parmar took on the challenge of women in technology, and documented how small changes can make a big difference. Given the madness of #gamergate, the discussion was a useful reminder of inequity in many fields and for many.  She left lots of time to have a meaningful discussion about the issues, a nice touch.

Owing to the commitments both personal and speaking, I didn’t get to see many sessions. I had the usual situation of good ones, and a not-so-good one (though I admit my bar is kind of high).  I like that the Guild balances known speakers and topics with taking some chances on both.  I also note that most of the known speakers are those folks I respect who continue to think ahead and bring new perspectives, even if in a track representing their work.  As a consequence, the overall quality is always very high.

And the associated events continue to improve.  The DemoFest was almost too big this year: so many examples that it was hard even to start looking at them, since you want to be fair and see them all, but it’s just too monumental. Of course, the Guild had a guide that grouped them, so you could drill down into the ones you wanted to see.  The expo reception was a success as well, and the various snack breaks suited the opportunity to mingle.  I kept missing the ice cream, but perhaps that’s for the best.

I was pleased to have the biggest turnout yet for a workshop, and take the interest in elearning strategy as an indicator that the revolution is taking hold.  The attendees were faced with the breadth of things to consider across advanced ID, performance support, eCommunity, backend integration, decoupled delivery, and then were led through the process of identifying elements and steps in the strategy.  The informal feedback was that, while daunted by the scope, they were excited by the potential and recognizing the need to begin.  The fact that the Guild is holding the Learning Ecosystem conference and their release of a new and quite good white paper by Marc Rosenberg and Steve Foreman are further evidence that awareness is growing.   Marc and Steve carve up the world a little differently than I do, but we say similar things about what’s important.

I am also pleased that Mobile interest continues to grow, as evidenced by the large audience at our mobile panel, where I was joined by other mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell.  They provide nicely differing viewpoints, with Sarah representing the irreverent designer, Robert the pragmatic systems perspective, and Chad the advanced technology view, to complement my more conceptual approach.  We largely agree, but represent different ways of communicating and thinking about the topic. (Sarah and I will be joined by Nick Floro for ATD’s mLearnNow event in New Orleans next week).

I also talked about trying to change the pedagogy of elearning at the Wadhwani Foundation, the approach we’re taking and the challenges we face.  The goal I’m involved in is job skilling, and consequently there’s a real need and a real opportunity.  What I’m fighting for is meaningful practice as a way to achieve real outcomes.  We have some positive steps and some missteps, but I think we have the chance to have a real impact. It’s a work in progress, and fingers crossed.

So what did I learn?  The good news is that the audience is getting smarter, wanting more depth in their approaches and breadth in what they address. The bad news appears to be that the view of ‘information dump & knowledge test = learning’ is still all too prevalent. We’re making progress, but too slowly (ok, so perhaps patience isn’t my strong suit ;).  If you haven’t, please do check out the Serious eLearning Manifesto to get some guidance about what I’m talking about (with my colleagues Michael Allen, Julie Dirksen, and Will Thalheimer).  And now there’s an app for that!

If you want to get your mind around the forefront of learning technology, at least in the organizational space, DevLearn is the place to be.

 

