Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

3 March 2015

On the road again

Clark @ 7:42 am

Well, some more travels are imminent, so I thought I’d update you on where the Quinnovation road show will be on tour this spring:

  • March 9-10 I’ll be collaborating with Sarah Gilbert and Nick Floro to deliver ATD’s mLearnNow event on mobile learning in Miami
  • On the 11th I’ll be at a private event talking the Revolution to a select group outside Denver
  • Come the 18th I’ll be inciting the revolution at the ATD Golden Gate chapter meeting here in the Bay Area
  • On the 25th-27th, I’ll be in Orlando again instigating at the eLearning Guild’s Learning Solutions conference
  • May 7-8 I’ll be kicking up my heels about the revolution for the eLearning Symposium in Austin
  • I’ll be stumping the revolution at another vendor event in Las Vegas May 12-13
  • And June 2-3 I’ll be myth-smashing for ATD Atlanta, and then workshopping game design

So, if you’re at one of these, do come up and introduce yourself and say hello!


25 February 2015

mLearning more than mobile elearning?

Clark @ 6:17 am

Someone tweeted about their mobile learning credo, and mentioned the typical ‘mlearning is elearning, extended’ view. Which I rejected, as I believe mlearning is much more (and so should elearning be).  And then I thought about it some more.  So I’ll lay out my thinking, and see what you think.

I have been touting that mLearning could and should be focused, as should P&D, on anything that helps us achieve our goals better. Mobile, paper, computers, voodoo, whatever technology works.  Certainly in organizations.  And this yields some interesting implications.

So, for instance, this would include performance support and social networks. Anything that requires understanding how people work and learn would be fair game. I was worried about whether that fit some operational aspects like IT and manufacturing processes, but I think I’ve got that sorted. UI folks would work on external products and any internal software development, but beyond that, helping folks use tools and processes belongs to those of us who facilitate organizational performance and development. So we, and mlearning, are about any of those uses.

But the person, despite seeming to come from a vendor serving orgs, not schools, could be talking about schools instead, and I wondered whether mLearning for schools, definitionally, really is only about supporting learning. And I can see the case for that: that mlearning in education is about using mobile to help people learn, not perform. It’s about collaboration, for sure, and tools to assist.

Note I’m not making the case for schools as they are; a curriculum rethink definitely needs to accompany using technology in schools in many ways. Koreen Pagano wrote this nice post separating Common Core teaching versus assessment, which goes along with my beliefs about the value of problem solving. And I also laud Roger Schank‘s views, such as the value (or not) of the binomial theorem as a classic example.

But then, mobile should be a tool in learning, so it can work as a channel for content, but also for communication, capture, and compute (i.e. the 4C’s of mlearning). And there’s the emergent capability of contextual support (the 5th C, i.e. combinations of the first four). So this view would argue that mlearning can be used for performance support in accomplishing a meaningful task that’s part of a learning experience.

That takes me back to mlearning being more than just mobile elearning, a distinction Jason Haag has aptly drawn. Sure, mobile elearning can be a subset of mlearning, but it’s not the whole picture. Does this make sense to you?

24 February 2015

Making ‘sense’

Clark @ 8:19 am

I recently wrote about wearables, where I focused on form factor and information channels. An article I read recently profiled a guy who builds spy gear, and near the end he mentioned some things that started me thinking about an extension of that for all mobile, not just wearables. The topic is sensors.

In the article, he talks about how, in the future, glasses could detect whether you’ve been around bomb-making materials:

“You can literally see residue on someone if your glasses emit a dozen different wavelengths of microlasers that illuminate clothing in real time and give off a signature of what was absorbed or reflected.”

That’s pretty amazing: chemical spectrometry on the fly. He goes on to talk about distance vision:

“Imagine you have a pair of glasses, and you can just look at a building 50 feet away, 100 feet away, and look right through the building and see someone moving around.”

Now, you might or might not like what he’s doing with that, but imagine applying it elsewhere: identifying where people are for rescue, or identifying materials for quality control.

Heck, I’d find it interesting just to augment the camera with infrared and ultraviolet: imagine being able to use the camera on your phone or glasses to see what’s happening at night, e.g. wildlife (tracking coyotes or raccoons, and managing to avoid skunks!). Night vision and seeing things that fluoresce under UV would both be really cool additions.

I’d also be interested in having them enlarge things, bringing small details to light like a magnifying glass or microscope.

It made me think about all the senses we could augment. I was thinking about walking our dogs, and how their olfactory life is much richer than ours. They are clearly sensing things beyond our olfactory capabilities, and it would be interesting to have some microscent detectors that could follow faint traces to track animals (or know which owner is not adequately controlling a dog, ahem). They could potentially serve as smoke or carbon monoxide detectors also.

Similarly, auditory enhancement: could we hear things fainter than our ears detect, or have devices serve as a stethoscope? Could we detect far-off cries for help that our ears can’t? Of course, that could be misused, too, to eavesdrop on conversations. Interesting ethical issues come in.

And we’ve already heard about the potential to measure one’s movement, blood pressure, pulse, temperature, and maybe even blood sugar, to track one’s health. The fitness bands are getting smarter and more capable.

There is also the possibility of sensing things we personally can’t directly track: measuring ambient temperature and air pressure quantitatively is already possible in some devices. The thermometer could be a health and weather guide, and a barometer/altimeter would be valuable for hiking in addition to weather.

The combination of reporting these could be valuable too. Sensor nets, where the data from many micro sensors are aggregated, have interesting possibilities: either with known combinations, such as aggregating temperature and air pressure to help with weather, or with machine learning, where for example we might include sensitive motion detectors and learn to predict earthquakes, as animals supposedly can. Sounds, too, could be used to triangulate on cries for help, and material detectors could help locate sources of pollution.
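
To make the aggregation idea concrete, here’s a minimal sketch in Python; the sensor IDs, data format, and threshold are all made up for illustration. It pools barometric readings from several devices and flags a rapid regional pressure drop, a rough storm signal:

    from statistics import mean

    # Hypothetical readings: (sensor_id, unix_time_s, pressure_hPa)
    readings = [
        ("s1", 0, 1015.2), ("s2", 0, 1014.8), ("s3", 0, 1015.0),
        ("s1", 3600, 1009.1), ("s2", 3600, 1008.7), ("s3", 3600, 1009.4),
    ]

    def regional_pressure(readings, t):
        """Average pressure across all sensors reporting at time t."""
        return mean(p for (_sensor, ts, p) in readings if ts == t)

    # A drop of ~3 hPa in an hour is an illustrative threshold here,
    # not a meteorological standard.
    drop = regional_pressure(readings, 0) - regional_pressure(readings, 3600)
    if drop > 3.0:
        print(f"Pressure falling fast ({drop:.1f} hPa/hr): storm likely")

The same pattern, pooling many cheap sensors to find a signal no single sensor could provide, is what would underpin the earthquake or cries-for-help examples, just with accelerometers or microphones in place of barometers.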

We’ve done amazing things with technology, and sensors are both shrinking and getting more powerful. Imagine having sensors scattered about your body in various wearables, integrating that data in known ways, and agreeing to anonymous aggregation for data mining. Yes, there are concerns, but benefits too.

We can put these together in interesting ways: notifications of things we should pay attention to, or simply the chance to observe things our natural senses can’t detect. We can open up the world in powerful ways to support being more informed and more productive. It’s up to us to harness it in worthwhile ways.

21 January 2015

Wearables?

Clark @ 8:22 am

In a discussion last week, I suggested that the things I was excited about included wearables. Sure enough, someone asked if I’d written anything about it, and I haven’t, much. So here are some initial thoughts.

I admit I was not a Google Glass ‘Explorer’ (and now the program has ended). While tempted to experiment, I tend not to spend money until I see how a device is really going to make me more productive. For instance, when the iPad was first announced, I didn’t want one. Between the time it was announced and the time it was available, however, I figured out how I’d use it to produce, not just consume. I got one the first day it came out. By the same rationale, I got a Palm Pilot pretty early on, and it made me much more effective. I haven’t gotten a wrist health band, on the other hand, though I don’t think they’re bad ideas; they’re just not what I need.

The point being that I want to see a clear value proposition before I spend my hard-earned money. So what am I thinking in regards to wearables? What wearables do I mean? I am talking wrist devices, specifically. (I may eventually warm up to glasses as well, when they offer more augmented reality than they do now.) Why wrist devices? That’s what I’m wrestling with: trying to articulate what has so far been a more intuitive assessment.

Part of it, at least, is that it’s with me all the time, but in an unobtrusive way. It supports a quick flick of the wrist instead of pulling out a whole phone, so it can deliver that ‘smallest info’ in an easy way. And, more importantly, I think it can bring things to my attention more subtly than a phone can. I don’t need a loud ringing!

I admit that I’m keen on a more mixed-initiative relationship than I currently have with my technology. I use my smartphone to get things I need, and it can alert me to things that I’ve indicated I’m interested in, such as events that I want an audio alert for. And of course, for incoming calls. But what about things that my systems come up with on their own? This is increasingly possible, and again desirable. Using context, and if a system had some understanding of my goals, it might be able to be proactive. So imagine you’re out and about, and your watch reminds you that while you’re here you wanted to pick up something nearby, providing the item and location. Or prompts you to prep for that upcoming meeting, with some minimal but useful info. Note that this is largely not what’s currently on offer. We already have geofencing to do some of this, but right now you largely have to pull out your phone, or have it make an intrusive noise to be heard from your pocket or purse.
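
The geofencing trigger underneath such a reminder is conceptually simple. Here’s a minimal sketch, with a made-up reminder and coordinates, using the haversine formula to decide whether the watch should tap you:

    from math import radians, sin, cos, asin, sqrt

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two points, in meters."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 \
            + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    # Hypothetical reminder tied to a place rather than a time
    reminder = {"text": "Pick up printer ink", "lat": 37.7793,
                "lon": -122.4193, "radius_m": 200}

    def check_geofence(cur_lat, cur_lon, reminder):
        """Fire the reminder (say, a subtle wrist tap) once inside the fence."""
        if distance_m(cur_lat, cur_lon,
                      reminder["lat"], reminder["lon"]) <= reminder["radius_m"]:
            print("Watch tap:", reminder["text"])

    check_geofence(37.7790, -122.4194, reminder)  # inside the fence: tap fires

The math is trivial; the real design question is which channel (a tap, a glance, a sound) matches the urgency.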

So two things about this: one, why the watch and not the phone; the other, why not the glasses? The watch form factor is, to me, a more accessible interface to serve as an interactive companion. As I suggested, pulling the phone out of the pocket, turning it on, and going through the security check (even just my fingerprint) adds more overhead than I necessarily want. If I can have something less intrusive, even as part of a system and not fully capable on its own, that’s OK. Why not glasses? I guess it’s just that they seem more unnatural. I am accustomed to having information on my wrist, and while I wear glasses, I want them to be invisible to me. I would love to have a heads-up display at times, but all the time would seem to get annoying. I’ll stretch and suggest that the empirical result that most folks have stopped wearing them most of the time bears out my story.

Why not a ring, or a pendant, or? A ring seems to have too small an interface area. A pendant isn’t easily observable. My wrist is easy to glance at (hence, watches). Why not a whole forearm console? If I need that much interface, I can always pull out my phone. Or jump to my tablet. Maybe I will eventually want an iBracer, but I’m not yet convinced. A forearm holster for my iPhone? Hmmm…maybe too geeky.

So, reflecting on all this, it appears I’m thinking about tradeoffs of utility versus intrusion. A wrist device seems to fit a sweet spot in an ecosystem of tech: the quick glance on the wrist, pocket access with the phone, and then various tradeoffs of size and weight for real productivity between tablets and laptops.

Of course, the real issue is whether there’s sufficient information available through the watch to make a value proposition. Is there enough that’s easy to get to that doesn’t require a phone? Check the temperature? Take a (voice) note? Get a reminder, take a call, check your location? My instinct is that there is. There are times I’d be happy not to have to take my phone (to the store, to a party) if I could take calls on my wrist, do minimal note taking and checking, and navigate. From the business perspective, there’s also performance support, whether push or pull. I don’t see it for courses, but for just-in-time… And contextual.

This is all just thinking aloud at this point.  I’m contemplating the iWatch but don’t have enough information as of yet.  And I may not feel the benefits outweigh the costs. We’ll see.

5 November 2014

#DevLearn 14 Reflections

Clark @ 9:57 am

This past week I was at the always great DevLearn conference, the biggest and arguably best yet.  There were some hiccups in my attendance, as several blocks of time were taken up with various commitments both work and personal, so for instance I didn’t really get a chance to peruse the expo at all.  Yet I attended keynotes and sessions, as well as presenting, and hobnobbed with folks both familiar and new.

The keynotes were arguably even better than before, and a high bar had already been set.

Neil deGrasse Tyson was eloquent and passionate about the need for science and the lack of match between school and life.    I had a quibble about his statement that doing math teaches problem-solving, as it takes the right type of problems (and Common Core is a step in the right direction) and it takes explicit scaffolding.  Still, his message was powerful and well-communicated. He also made an unexpected connection between Women’s Liberation and the decline of school quality that I hadn’t considered.

Beau Lotto also spoke, linking how our past experience alters our perception to necessary changes in learning. While I was familiar with his starting point about perception (a fundamental part of cognitive science, my doctoral field), he took it in a very interesting and useful direction in an engaging and inspiring way. His take-home message, teach not how to see but how to look, was succinct and apt.

Finally, Belinda Parmar took on the challenge of women in technology, and documented how small changes can make a big difference. Given the madness of #gamergate, the discussion was a useful reminder of inequity in many fields and for many.  She left lots of time to have a meaningful discussion about the issues, a nice touch.

Owing to the commitments both personal and speaking, I didn’t get to see many sessions. I had the usual situation of good ones, and a not-so-good one (though I admit my criteria are kind of high). I like that the Guild balances known speakers and topics with taking some chances on both. I also note that most of the known speakers are folks I respect who continue to think ahead and bring new perspectives, even if in a track representing their work. As a consequence, the overall quality is always very high.

And the associated events continue to improve. The DemoFest was almost too big this year: so many examples that it’s hard to even start looking at them, as you want to be fair and see them all, but it’s just too monumental. Of course, the Guild had a guide that grouped them, so you could drill down into the ones you wanted to see. The expo reception was a success as well, and the various snack breaks suited the opportunity to mingle. I kept missing the ice cream, but perhaps that’s for the best.

I was pleased to have the biggest turnout yet for a workshop, and take the interest in elearning strategy as an indicator that the revolution is taking hold.  The attendees were faced with the breadth of things to consider across advanced ID, performance support, eCommunity, backend integration, decoupled delivery, and then were led through the process of identifying elements and steps in the strategy.  The informal feedback was that, while daunted by the scope, they were excited by the potential and recognizing the need to begin.  The fact that the Guild is holding the Learning Ecosystem conference and their release of a new and quite good white paper by Marc Rosenberg and Steve Foreman are further evidence that awareness is growing.   Marc and Steve carve up the world a little differently than I do, but we say similar things about what’s important.

I am also pleased that Mobile interest continues to grow, as evidenced by the large audience at our mobile panel, where I was joined by other mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell.  They provide nicely differing viewpoints, with Sarah representing the irreverent designer, Robert the pragmatic systems perspective, and Chad the advanced technology view, to complement my more conceptual approach.  We largely agree, but represent different ways of communicating and thinking about the topic. (Sarah and I will be joined by Nick Floro for ATD’s mLearnNow event in New Orleans next week).

I also talked about trying to change the pedagogy of elearning at the Wadhwani Foundation: the approach we’re taking and the challenges we face. The goal I’m involved in is job skilling, and consequently there’s a real need and a real opportunity. What I’m fighting for is meaningful practice as a way to achieve real outcomes. We have some positive steps and some missteps, but I think we have the chance to have a real impact. It’s a work in progress, and fingers crossed.

So what did I learn? The good news is that the audience is getting smarter, wanting more depth in their approaches and breadth in what they address. The bad news appears to be that the view of ‘information dump & knowledge test = learning’ is still all too prevalent. We’re making progress, but too slowly (ok, so perhaps patience isn’t my strong suit ;). If you haven’t, please do check out the Serious eLearning Manifesto (written with my colleagues Michael Allen, Julie Dirksen, and Will Thalheimer) to get some guidance about what I’m talking about. And now there’s an app for that!

If you want to get your mind around the forefront of learning technology, at least in the organizational space, DevLearn is the place to be.


28 October 2014

Cognitive prostheses

Clark @ 8:05 am

While our cognitive architecture has incredible capabilities (how else could we come up with advances such as Mystery Science Theater 3000?), it also has limitations. The same adaptive capabilities that let us cope with information overload in both familiar and new ways also lead to some systematic flaws. And that led me to think about the ways in which we compensate for these limitations, as they have implications for designing solutions for our organizations.

The first limit is at the sensory level. Our mind actually processes pretty much all the visual and auditory sensory data that arrives, but it disappears quickly (within milliseconds) except for what we attend to; basically, your brain fills in the rest (which leaves open the opportunity for mistakes). What do we do? We’ve created tools that let us capture things accurately: cameras and audio recorders. These allow us to capture the context exactly, not as our memory reconstructs it.

A second limitation is our ‘working’ memory. We can’t hold too much in mind at one time. We ‘chunk’ information together as we learn it, and can then hold more total information at once. Also, the format of working memory is largely ‘verbal’. Consequently, using tools like diagramming, outlines, or mindmaps adds structure to our knowledge and supports our ability to work on it.

Another limitation of our working memory is that it doesn’t support complex calculations with many intermediate steps. Consequently we need ways to deal with this. External representations (as above), such as recording intermediate steps, work, but we can also build tools that offload that processing, such as calculators. Wizards, or interactive dialog tools, are another form of calculator.

Processing information in short-term memory can lead to it being retained in long-term memory. Here the storage is almost unlimited in duration and scope, but it’s hard to get information in, and it isn’t remembered exactly, but by meaning. Consequently, models are a better learning strategy than rote learning. And external sources, like the ability to look up or search for information, are far better than trying to get it all in the head.

Similarly, external support for when we do have to do things by rote is a good idea. Support for process is valuable, which is why checklists have been a ubiquitous and useful way to get more accurate execution.

In execution, we have a few flaws too. We’re heavily biased to solve new problems in the ways we’ve solved previous problems (even if that’s not the best approach). We’re also likely to use tools in familiar ways and miss new ways to use tools to solve problems. There are ways to prompt lateral thinking at appropriate times, and we can both make access to such support available, and even trigger it when we have contextual clues.

We’re also biased to prematurely converge on an answer (intuition) rather than seek to challenge our findings. Access to data and support for capturing and invoking alternative ways of thinking are more likely to prevent such mistakes.

Overall, our use of more formal logical thinking fatigues quickly. Scaffolding help like the above decreases the likelihood of a mistake and increases the likelihood of an optimal outcome.

When you look at performance gaps, you should look to such approaches first, and look to putting information in the head last. This more closely aligns our support efforts with how our brains really think, work, and learn. This isn’t a complete list, I’m sure, but it’s a useful beginning.

24 October 2014

#DevLearn Schedule

Clark @ 8:30 am

As usual, I will be at DevLearn (in Las Vegas) this next week, and welcome meeting up with you there.  There is a lot going on.  Here’re the things I’m involved in:

  • On Tuesday, I’m running an all day workshop on eLearning Strategy. (Hint: it’s really a Revolutionize L&D workshop ;).  I’m pleasantly surprised at how many folks will be there!
  • On Wednesday at 1:15 (right after lunch), I’ll be speaking on the design approach I’m leading at the Wadhwani Foundation, where we’re trying to integrate learning science with pragmatic execution.  It’s at least partly a Serious eLearning Manifesto session.
  • On Wednesday at 2:45, I’ll be part of a panel on mlearning with my fellow mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell, chaired by conference program director David Kelly.

Of course, there’s much more. A few things I’m looking forward to:

  • The keynotes:
    • Neil deGrasse Tyson, a fave for his witty support of science
    • Beau Lotto talking about perception
    • Belinda Parmar talking about women in tech (a burning issue right now)
  • DemoFest, all the great examples people are bringing
  • and, of course, the networking opportunities

DevLearn is probably my favorite conference of the year: learning focused, technologically advanced, well organized, and with the right people.  If you can’t make it this year, you might want to put it on your calendar for another!

21 October 2014

Extending Mobile Models

Clark @ 8:19 am

In preparation for a presentation, I was reviewing my mobile models. You may recall I started with my 4C’s model (Content, Compute, Communicate, & Capture), and have mapped that further onto Augmenting Formal, Performance Support, Social, & Contextual. I’ve refined it as well, separating out contextual and social as different ways of looking at formal and performance support. And, of course, I’ve elaborated it again, and wonder whether you think this more detailed conceptualization makes sense.

So, my starting point was realizing that it wasn’t just content. That is, there’s a difference between content and compute: interactivity was an important part of the 4C’s, and the characteristics in the content box weren’t discriminated enough. So the two new initial sections are mlearning content and mlearning compute, by self or social. That is, we can be getting things as individuals, or it can be something that’s socially generated or socially enabled.

The point is that content is prepared media, whether text, audio, or video. It can be delivered or accessed as needed. Compute, interactive capability, is harder, but potentially more valuable. Here, an individual might actively practice, have mixed initiative dialogs, or even work with others or tools to develop an outcome or update some existing shared resources.

Things get more complex when we go beyond these elements. So I had capture as one thing, and I’m beginning to think it’s two: one is capturing the current context and keeping or sharing that for various purposes, and the other is the system using that context to do something unique.

To be clear here, capture is where you use text insertion, the microphone, or the camera to catch unique contextual data (or user input). It could also be other data, such as location, time, barometric pressure, temperature, or more. This data, then, is available to review, reflect on, or more. It can be combinations, of course, e.g. a picture at this time and this location.

Now, if the system uses this information to do something different than under other circumstances, we’re contextualizing what we do. Whether it’s because of when you are, providing specific information, or where you are, using location characteristics, this is likely to be the most valuable opportunity. Here I’m thinking alternate reality games or augmented reality (whether it’s voiceover, visual overlays, what have you).

And I think this is device independent, e.g. it could apply to watches or glasses or… as well as phones and tablets. It means my 4 C’s become: content, compute, capture, and contextualize. To ponder.
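
To ponder, indeed. As a purely illustrative sketch (nothing here is a real system; the fields and rules are invented), the capture/contextualize distinction might look like this in code: capture records the current context for later use, while contextualize changes what’s delivered because of it:

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical client-site coordinates
    SITE_LAT, SITE_LON = 37.78, -122.42

    @dataclass
    class Context:
        """Captured context: when and where (illustrative fields only)."""
        when: datetime
        lat: float
        lon: float

    def capture(lat, lon):
        # Capture: record context for later review, reflection, or sharing
        return Context(datetime.now(), lat, lon)

    def near(lat, lon, ref_lat, ref_lon):
        # Crude proximity check; a real system would use proper geo distance
        return abs(lat - ref_lat) < 0.01 and abs(lon - ref_lon) < 0.01

    def contextualize(ctx):
        # Contextualize: deliver something *different* because of the context
        if near(ctx.lat, ctx.lon, SITE_LAT, SITE_LON):
            return "You're near the client site: here's the account history"
        if ctx.when.hour < 9:
            return "Morning prep: today's meetings and notes"
        return "Default content feed"

    print(contextualize(capture(37.781, -122.419)))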

So, this is a more nuanced look at the mobile opportunities, and certainly more complex as well. Does the greater detail provide greater benefit?


17 September 2014

Learning in 2024 #LRN2024

Clark @ 8:14 am

The eLearning Guild is celebrating its 10th year, and is using the opportunity to reflect on what learning will look like 10 years from now. While I couldn’t participate in the twitter chat they held, I optimistically weighed in: “learning in 2024 will look like individualized personal mentoring via augmented reality, AI, and the network”. However, I thought I would elaborate in line with a series of followup posts leveraging the #lrn2024 hashtag. The twitter chat had a series of questions, so I’ll address them here (with a caveat: our learning itself really hasn’t changed, as our wetware hasn’t evolved in the past decade and won’t in the next; our support of learning is what I’m referring to here):

1. How has learning changed in the last 10 years (from the perspective of the learner)?

I reckon the learner has seen a significant move to more elearning instead of an almost complete dependence on face-to-face events. And I reckon most learners have begun to use technology in their own ways to get answers, whether via the Google, or social networks like Facebook and LinkedIn. And I expect they’re seeing more media such as videos and animations, and may even be creating their own. I also expect that the elearning they’re seeing is not particularly good, nor improving, if not actually decreasing in quality. I expect they’re seeing more info dump/knowledge test, more and more ‘click to learn more‘, more tarted-up drill-and-kill. For which we should apologize!

2. What is the most significant change technology has made to organizational learning in the past decade?

I reckon there are two significant changes that have happened. One is rather subtle as yet, but will be profound: the ability to track more activity, mine more data, and gain more insights. The Experience API (xAPI) coupled with analytics is a huge opportunity. The other is the rise of social networks. The ability to stay more tightly coupled with colleagues, sharing information and collaborating, has really become mainstream in our lives, and is going to have a big impact on our organizations. Working ‘out loud’, showing our work, and working together is a critical inflection point in bringing learning back into the workflow in a natural way, and away from the ‘event’ model.
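
To give a flavor of what that tracking looks like, here’s a minimal sketch of sending an xAPI statement to a Learning Record Store; the LRS URL, credentials, and activity are placeholders, but the actor/verb/object structure follows the spec:

    import json
    import urllib.request

    # A minimal xAPI statement: who did what to which activity
    statement = {
        "actor": {"mbox": "mailto:learner@example.com", "name": "Pat Learner"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"id": "http://example.com/activities/safety-sim",
                   "definition": {"name": {"en-US": "Safety simulation"}}},
    }

    # Placeholder endpoint and auth; real values come from your LRS
    req = urllib.request.Request(
        "https://lrs.example.com/xAPI/statements",
        data=json.dumps(statement).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "X-Experience-API-Version": "1.0.3",
                 "Authorization": "Basic <credentials>"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment once a real LRS is configured

The analytics opportunity comes from mining those statements in aggregate: who’s doing what, where they struggle, and what correlates with performance.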

3. What are the most significant challenges facing organizational learning today?

The most significant challenge is the status quo: the belief that an information-oriented event model has any relationship to meaningful outcomes. This plays out in so many ways: order-taking for courses, equating information with skills, being concerned with speed and quantity instead of quality of outcomes, not measuring the impact; the list goes on. We’ve deluded ourselves that having an LMS and a rapid elearning tool means we’re doing something worthwhile, when it’s profoundly wrong. L&D needs a revolution.

4. What technologies will have the greatest impact on learning in the next decade? Why?

The short answer is mobile. Mobile is the catalyst for change. So many other technologies go through the hype cycle: initial over-excitement, crash, and then a gradual resurgence (cf. virtual worlds), but mobile has been resistant for the simple reason that there’s so much value proposition. The cognitive augmentation that digital technology provides, available whenever and wherever you are, clearly has benefits, and it’s not courses! It will naturally incorporate augmented reality with the variety of new devices we’re seeing, and be contextualized as well. We’re seeing a richer picture of how technology can support us in being effective, and L&D can facilitate these other activities as a way to move to a more strategic and valuable role in the organization. As above, also new tracking and analysis tools, and social networks. I’ll add that simulations/serious games are an opportunity that is yet to really be capitalized on. (There are reasons I wrote those books :)

5. What new skills will professionals need to develop to support learning in the future?

As I wrote (PDF), the new skills that are necessary fall into two major categories: performance consulting and interaction facilitation. We shouldn’t design courses until we’ve ascertained that no other approach will work, so we need to get down to the real problems. We should hope that the answer comes from the network when it can, design performance support solutions when it can’t, and reserve courses for only when it absolutely has to be in the head. Getting good outcomes from the network takes facilitation, and I think facilitation is a good model for promoting innovation, supporting coaching and mentoring, and helping individuals develop self-learning skills. So the ability to get at the root causes of problems, choose between solutions, and measure the impact is key for the first part; understanding what skills individuals need (whether performers or mentors/coaches/leaders) and how to develop them is the key new addition.

6. What will learning look like in the year 2024?

Ideally, it would look like an ‘always on’ mentoring solution, so the experience is that of someone always with you to watch your performance and provide just the right guidance to help you perform in the moment and develop you over time. Learning will be layered on to your activities, and only occasionally will require some special events but mostly will be wrapped around your life in a supportive way.  Some of this will be system-delivered, and some will come from the network, but it should feel like you’re being cared for in the most efficacious way.

In closing, I note that, unfortunately, my Revolution book and the Manifesto were both driven by a sense of frustration around the lack of meaningful change in L&D. Hopefully they’re riding or catalyzing the needed change, but in a cynical mood I might believe that things won’t change nearly as much as I’d hope. I also remember a talk (cleverly titled: Predict Anything but the Future :) that said the future does tend to come out much as an informed basis would predict, but with an unexpected twist, so it’ll be interesting to discover what that twist will be.

16 September 2014

On the Road Fall 2014

Clark @ 8:05 am

Fall always seems to be a busy time, and I reckon it’s worthwhile to let you know where I’ll be in case you might be there too! Coming up are a couple of different events that you might be interested in:

September 28-30 I’ll be at the Future of Talent retreat at the Marconi Center up the coast from San Francisco. It’s a lovely spot with a limited number of participants who will go deep on what’s coming in the Talent world. I’ll be talking up the Revolution, of course.

October 28-31 I’ll be at the eLearning Guild’s DevLearn in Las Vegas (always a great event; if you’re into elearning you should be there). I’ll be running a Revolution workshop (I believe there are still a few spots), part of a mobile panel, and talking about how we are going about addressing the challenges of learning design at the Wadhwani Foundation.

November 12-13 I’ll be part of the mLearnNow event in New Orleans (well, that’s what I call it; they call it LearnNow mobile blah blah blah ;). Again, there are some slots still available. I’m honored to be co-presenting with Sarah Gilbert and Nick Floro (with Justin Brusino pulling strings in the background), and we’re working hard to make sure it’s a really great deep dive into mlearning. (And, New Orleans!)

There may be one more opportunity, so if anyone in Sydney wants to talk, consider Nov 21.

Hope to cross paths with you at one or more of these places!
