Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

18 August 2015

Where in the world is…

Clark @ 8:09 am

It’s time for another game of Where’s Clark?  As usual, I’ll be somewhat peripatetic this fall, but more broadly scoped than usual:

  • First I’ll be hitting Shenzhen, China at the end of August to talk advanced mlearning for a private event.
  • Then I’ll be hitting the always excellent DevLearn in Las Vegas at the end of September to run a workshop on learning science for design (you should want to attend!) and give a session on content engineering.
  • At the beginning of November I’ll be at LearnTech Asia in Singapore, with an impressive lineup of fellow speakers to again sing the praises of reforming L&D.

Yes, it’s quite the whirl, but with this itinerary I should be somewhere near you almost anywhere you are in the world. (Or engage me to show up at your locale!) I hope to see you at one event or another before the year is out.


11 August 2015

Content engineering

Clark @ 8:09 am

We’ve heard about learning engineering, and while the focus is on experience design, the pragmatics include designing content to create the context, resources, and motivation for the activity. And it’s time we step beyond just hardwiring this content together, and start treating it as professionals do.

Look at business websites these days. You can customize the content you’re searching for with filters. The content reacts to the device you’re on and displays appropriately. There can even be content specific to your particular path through the site and your previous visits. Just look at Amazon or Netflix recommendations!

This doesn’t happen with hardwired sites anymore. If you look at the conferences around content, you’ll find they’re talking about industrial-strength solutions. They use content management systems, carefully articulated with tight definitions and associated tags, and rules that pull those content elements together, by definition, into the resulting site. This is content engineering, and it’s a direction we need to go.

What’s involved is tighter templates around content roles, metadata describing the content, and management of the content. You write into the system, describe it, and pull it out by description, not by hard link. This allows flexibility, with rules that can pull differentially by context: different people, roles, needs, and devices. We also separate what it says from how it looks, using tags to support rendering appropriately on different devices rather than hard-coding the appearance along with the content and the assembly.
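To make that concrete, here’s a minimal sketch, assuming a hypothetical content store and made-up field names (no particular CMS): items carry metadata, and a query pulls them by description instead of by hard link.

```python
# Minimal sketch of rule-based content assembly: items carry metadata,
# and a query pulls them by description instead of by hard link.
# (Hypothetical fields and values, just to illustrate the idea.)

content_store = [
    {"id": "ex-042", "role": "example", "topic": "negotiation",
     "audience": "sales", "device": "mobile"},
    {"id": "ex-043", "role": "example", "topic": "negotiation",
     "audience": "sales", "device": "desktop"},
    {"id": "ob-007", "role": "objective", "topic": "negotiation",
     "audience": "sales", "device": "any"},
]

def pull(store, **criteria):
    """Return items whose metadata matches every criterion;
    a value of 'any' in an item acts as a wildcard."""
    return [item for item in store
            if all(item.get(key) in (want, "any")
                   for key, want in criteria.items())]

# Assemble the experience for a salesperson on a phone: no hard links,
# just a description of what's needed.
for item in pull(content_store, topic="negotiation",
                 audience="sales", device="mobile"):
    print(item["id"], "-", item["role"])
```

Swap device="desktop" in for device="mobile" and the assembly changes without touching a single hand-wired link; that’s the flexibility the rules buy you.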

This is additional work, but there are several reasons to do it. First, being tighter around content definitions provides a greater opportunity to be scientific about the role the content plays. We’re too lax with our content: beyond a good objective, we don’t specify what makes a good example, and so on. Second, by using a system to maintain that content, we can get more rigorous about content management. I regularly ask audiences whether they have outdated legacy content hanging around, and pretty much everyone agrees they do. That isn’t effective content governance; content should have regular cycles of review and expiry dates.
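As a sketch of what that governance might look like (again with hypothetical fields), giving each item a review-by date turns finding stale content into a query rather than an archaeology project:

```python
from datetime import date

# Hypothetical governance sweep: each content item carries a review-by
# date, so finding overdue content is a simple filter.
items = [
    {"id": "ex-042", "review_by": date(2015, 6, 30)},
    {"id": "ob-007", "review_by": date(2016, 1, 15)},
]

overdue = [item["id"] for item in items if item["review_by"] < date.today()]
print("Due for review or retirement:", overdue)
```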

By this tighter process, we not only provide better content design, delivery, and management, but we set the stage for the future. Personalization, customization, and contextualization are hampered when you have to hand-configure every option you will support. It’s much easier to write a new set of rules, and then your content can serve new purposes, new business models, and more.

If you want to know more about this, I hope to see you at my session on content at DevLearn!

29 July 2015

The future of libraries?

Clark @ 8:30 am

I had lunch recently with Paul Signorelli, who’s active in helping libraries with digital literacy, and during the conversation he talked about his vision of the future of the library. What I heard was a vision of libraries moving beyond content to be about learning, and this had several facets I found thought-provoking.

Now, as context, I’ve always been a fan of libraries and library science (and librarians). They were some of the first to deal with the issues involved in content organization, leading to information science, and their insight into tagging and finding is still influencing content architecture and engineering.  But here we’re talking about the ongoing societal role of libraries.

First, to be about learning, it has to be about experience, not content. This is the crux of a message I’ve tried to present to publishers, when they were still wrestling with the transition from book to content!  In this case, it’s an interesting proposition about how libraries would wrap their content to create learning experiences.

Interestingly, Paul also suggested he was thinking more broadly, about how libraries could also point to people who could help. This is a really intriguing idea: libraries becoming a local broker between expertise and needs. Not all the necessary resources are books or even print. Just as libraries now provide video and audio as well as print, and computer access to resources beyond the library’s collection, so too can they be about people.

This is a significant shift, but it parallels the oft-told story of marketing myopia, e.g. how railroads aren’t about trains but about transportation. What is the role of the library in the era of the internet, of self-help?

One role, of course, is to be the repository of research skills, of digital literacy (which is where this conversation had started). However, this notion of being a center for supporting learning, not just a center of content, extends those literacy skills to include learning as well! But it goes further.

This notion turns the role of a library into a solution: whether you need to get something done, learn something, or more. Going beyond just learning to performance support and social support, the library becomes the local hub for helping people succeed. He aptly pointed out how this is a natural way to use the fact that libraries tend to exist on public money: becoming an even richer part of supporting the community.

It’s also, of course, an interesting way to think about how the locus of supporting people shifts from L&D and library to a joint initiative.  Whether there’s still a corporate library is an open question, but it may be a natural partner to start thinking about a broader perspective for L&D in the organization. I’m still pondering the ways in which libraries could facilitate learning (just as trainers should become learning facilitators, so too should librarians?).


7 July 2015

2015 top 10 tools for learning

Clark @ 7:39 am

Jane Hart is widely (and deservedly) known for her Top 100 Tools for Learning (you too can register your vote). As a public service announcement, I list my top 10 tools for learning as well:

  1. Google search: I regularly look up things I hear of and don’t know. It often leads me to Wikipedia (my preferred source; teachers take note), but regularly (e.g. 99.99% of the time) provides me with links that give me the answer I need.
  2. Twitter: I am pointed to many amazing and interesting things via Twitter.
  3. Skype: the Internet Time Alliance maintains a Skype channel where we regularly discuss issues, and ask and answer each other’s questions.
  4. Facebook: there’s another group that I use like the Skype channel, and of course just what comes in from friends’ postings is a great source of lateral input.
  5. WordPress: my blogging tool, which provides regular reflection opportunities for me in generating posts, and more from the feedback others provide via comments.
  6. Microsoft Word: My writing tool for longer posts, articles, and of course books, and writing is a powerful force for organizing my thoughts, and a great way to share them and get feedback.
  7. Omnigraffle: the diagramming tool I use, and diagramming is a great way for me to make sense of things.
  8. Keynote: creating presentations is another way to think through things, and of course a way to share my thoughts and get feedback.
  9. LinkedIn: I share thoughts there and track a few of the groups (not as thoroughly as I wish, of course).
  10. Mail: Apple’s email program, and email is another way I can ask questions or get help.

Not making the top 10 but useful tools include Google Maps for directions, Yelp for eating,  Good Reader as a way to read and annotate PDFs, and Safari, where I’ve bookmarked a number of sites I read every day like news (ABC and Google News), information on technology, and more.

So that’s my list; what’s yours? I note, after the fact, that many are social media. Which isn’t a surprise, but reinforces just how social learning is!

Share yours with Jane via one of the methods she provides; it’s always interesting to see what emerges.

26 May 2015

Evolutionary versus revolutionary prototyping

Clark @ 8:14 am

At a recent meeting, one of my colleagues mentioned that increasingly people weren’t throwing away prototypes.  Which prompted reflection, since I have been a staunch advocate for revolutionary prototyping (and here I’m not talking about “the” Revolution ;).

When I used to teach user-centered design, the tools for creating interfaces were complex. The mantras were test early, test often, and I advocated Double Double P’s (Postpone Programming, Prefer Paper; an idea I first grabbed from Rob Phillips then at Curtin).  The reason was that if you started building too early in the design phase, you’d have too much invested to throw things away if they weren’t working.

These days, with agile programming, we see sprints producing working code, which then gets elaborated in subsequent sprints.  And the tools make it fairly easy to work at a high level, so it doesn’t take too much effort to produce something. So maybe we can make things that we can throw out if they’re wrong.

Ok, confession time, I have to say that I don’t quite see how this maps to elearning.  We have sprints, but how do you have a workable learning experience and then elaborate it?  On the other hand, I know Michael Allen’s doing it with SAM and Megan Torrance just had an article on it, but I’m not clear whether they’re talking storyboard, and then coded prototype, or…

Now that I think about it, I think it’d be good to document the core practice mechanic, and perhaps the core animation, and maybe the spread of examples.  I’m big on interim representations, and perhaps we’re talking the same thing. And if not, well, please educate me!

I guess the point is that I’m still keen on being willing to change course if we’ve somehow gotten it wrong. Small representations are good, increasing fidelity is fine, and so I suppose it’s okay if we don’t throw out prototypes often, as long as we do when we need to. Am I making sense, or what am I missing?

20 May 2015

Symbiosis

Clark @ 8:12 am

One of the themes I’ve been strumming in presentations is complementing what we do well with tools that do well the things we don’t. A colleague reminded me that JCR Licklider wrote of this decades ago (and I’ve similarly followed the premise from the writings of Vannevar Bush, Doug Engelbart, and Don Norman, among others).

We’re already seeing this. Chess has changed from people playing people, thru people playing computers and computers playing computers, to computer-human pairs playing other computer-human pairs. The best competitors aren’t the best chess players or the best programs, but the best pairs, that is, the player and computer that best know how to work together.

The implication is to stop trying to put everything in the head, and start designing systems that complement us in ways that ensure the combination is an optimized solution to the problem being confronted. Working backwards, we should decide what portion should be handled by the computer and what by the person (or team), then design the resources, and then train the humans to use the resources in context to achieve the goals.

Of course, this is only in the case of known problems, the ‘optimal execution’ phase of organizational learning. We similarly want to have the right complements to support the ‘continual innovation’ phase as well. What that means is that we have to be providing tools for people to communicate, collaborate, create representations, access and analyze data, and more. We need to support ways for people to draw upon and contribute to their communities of practice from their work teams. We need to facilitate the formation of work teams, and make sure that this process of interaction is provided with just the right amount of friction.

Just like a tire, interaction requires friction. Too little and you go skidding out of control; too much and you impede progress. People need to interact constructively to get the best outcomes. Much is known about productive interaction, though little enough seems to make its way into practice.

Our design approaches need to cover the complete ecosystem, everything from courses and resources to tools and playgrounds. And it starts by looking at distributed cognition, recognizing that thinking isn’t done just in the head, but in the world, across people and tools. Let’s get out and start playing instead of staying in old trenches.

25 March 2015

Tom Wujec #LSCon Keynote Mindmap

Clark @ 7:02 am

Tom Wujec gave a discursive and well-illustrated talk about how changes in technology were changing industry, ultimately homing in on creativity. Despite a misstep in mentioning Kolb’s invalid learning styles instrument, it was entertaining and intriguing.


24 March 2015

Tech Limits?

Clark @ 8:26 am

A couple of times last year, firms with some exciting learning tools approached me to talk about the market.  And in both cases, I had to advise them that there were some barriers they’d have to address. That was brought home to me in another conversation, and it makes me worry about the state of our industry.

So the first tool is based upon a really sound pedagogy that is consonant with my activity-based learning approach.  The basis is giving learners assignments very much like the assignments they’ll need to accomplish in the workplace, and then resourcing them to succeed.  They wanted to make it easy for others to create these better learning designs (as part of a campaign for better learning). The only problem was, you had to learn the design approach as well as the tool. Their interface wasn’t ready for prime time, but the real barrier was getting people to be able to use a new tool. I indicated some of the barriers, and they’re reconsidering (while continuing to develop content against this model as a service).

The second tool supports virtual role plays in a powerful way, having smart agents that react in authentic ways. And they, too, wanted to provide an authoring tool to create them.  And again my realistic assessment of the market was that people would have trouble understanding the tool.  They decided to continue to develop the experiences as a service.

Now, these are somewhat esoteric designs, though the former should be the basis of our learning experiences, and the latter would be a powerful addition to support a very common and important type of interaction.  The more surprising, and disappointing, issue came up with a conversation earlier this year with a proponent of a more familiar tool.

Without being specific (I’ve not received permission to disclose the details in all of the above), this person indicated that, when training a popular and fairly straightforward tool, the biggest barrier wasn’t the underlying software model. I was expecting that too much of the training was based upon rote assignments without an underlying model, and that is the case, but there was a more fundamental barrier: too many potential users just didn’t have sufficient computer skills! And I’m not talking about programming code; fundamental understandings of files and ‘styles’ and other core computing elements just were not present in sufficient quantities in these would-be authors. Seriously!

Now, I’ve complained before that we’re not taking learning design seriously, but obviously that problem is compounded by a lack of fundamental computer skills. Folks, this is elearning, not chalk learning, not chalk talk, not edoing, etc. If you struggle to add new apps on your computer, or to find files, you’re not ready to be an elearning developer.

I admit that I struggle to see how folks can assume that, without knowledge of design or knowledge of technology, they can still be elearning designers and developers. These tools are scaffolding to allow your designs to be developed. They don’t do design, nor will they magically cover for a lack of tech literacy.

So, let’s get realistic. Learn about learning design, and get comfortable with tech, or please, please, don’t do elearning. And I promise not to do music, architecture, finance, and everything else I’m not qualified to do. Fair enough?


17 March 2015

Making Sense of Research

Clark @ 7:37 am

A couple of weeks ago, I was riffing on sensors: how mobile devices are getting equipped with all sorts of new sensors, the potential for more, and what they might bring. Part of that discussion was a brief mention of sensor nets, and how aggregating all this data could be of interest too. And lo and behold, a massive example was revealed last week.

The context was the ‘spring forward’ event Apple held, where they announced their new products. The most anticipated was the Apple Watch (which was part of what drove my post on wearables), the new iConnected device for your wrist. The second major announcement was their new MacBook, a phenomenally thin laptop with some amazing specs on weight and screen display, as well as some challenging tradeoffs.

One announcement that was less noticed, though, was a new research endeavor, and I wonder if it isn’t the most game-changing element of them all. The announcement was ResearchKit, and it’s about sensor nets.

So, smartphones have lots of sensors.  And the watch will have more.  They can already track a number of parameters about you automatically, such as your walking.  There can be more, with apps that can ask about your eating, weight, or other health measurements.  As I pointed out, aggregating data from sensors could do things like identify traffic jams (Google Maps already does this), or collect data like restaurant ratings.

What Apple has done is focus specifically on health data, via HealthKit, and partner with research hospitals. What they’re saying to scientists is “we’ll give you anonymized health data; you put it to good use”. A number of research centers are on board, and already collecting data about asthma and more. The possibility is to use analytics that combine the power of large numbers with a bunch of other descriptive data to investigate things at scale. In general, research like this is hard because it’s hard to get large numbers of subjects, yet large numbers of subjects are a much better basis for study (for example, the China-Cornell-Oxford Project was able to look at a vast breadth of diets to make innovative insights into nutrition and health).

And this could be just the beginning: collecting data en masse (while successfully addressing privacy concerns) can be a source of great insight if it’s done right. Having devices that are with you and capable of capturing a variety of information gives the opportunity to mine that data for expected, and unexpected, outcomes.
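As a toy sketch of that aggregation idea (this is not ResearchKit’s actual API; the fields and numbers are made up), the core move is stripping identity, keeping the measurements, and analyzing the pooled data:

```python
import statistics

# Toy sketch: strip identifying fields, keep coarse location plus the
# measurements, then analyze the pooled data at scale.
# (Illustrative only; not Apple's ResearchKit API.)
readings = [
    {"user_id": "u1", "zip": "94101", "steps": 8200, "resting_hr": 61},
    {"user_id": "u2", "zip": "94101", "steps": 4300, "resting_hr": 74},
    {"user_id": "u3", "zip": "10001", "steps": 12100, "resting_hr": 58},
]

def anonymize(reading):
    # Drop identity, keep everything a researcher can safely use.
    return {key: value for key, value in reading.items() if key != "user_id"}

pooled = [anonymize(r) for r in readings]
print("Mean resting heart rate:",
      statistics.mean(r["resting_hr"] for r in pooled))
```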

A new iDevice is always cool, and while it’s not the first smart watch (nor was the iPhone the first smartphone, the iPad the first tablet, nor the iPod the first music player), Apple has a way of making the experience compelling. As with the iPad, I haven’t yet seen the personal value proposition, so I’m on the fence. But the ability to collect data in a massive way that could support ground-breaking insights and innovations in medicine? That has the potential to affect millions of people around the world. Now that is impact.

24 February 2015

Making ‘sense’

Clark @ 8:19 am

I recently wrote about wearables, where I focused on form factor and information channels. An article I recently read talked about a guy who builds spy gear, and near the end he mentioned some things that started me thinking about an extension of that for all mobile, not just wearables. The topic is sensors.

In the article, he talks about how, in the future, glasses could detect whether you’ve been around bomb-making materials:

“You can literally see residue on someone if your glasses emit a dozen different wavelengths of microlasers that illuminate clothing in real time and give off a signature of what was absorbed or reflected.”

That’s pretty amazing: chemical spectrometry on the fly. He goes on to talk about distance vision:

“Imagine you have a pair of glasses, and you can just look at a building 50 feet away, 100 feet away, and look right through the building and see someone moving around.”

Now, you may or may not like what he’s doing with that, but imagine applying it elsewhere: identifying where people are for rescue, or identifying materials for quality control.

Heck, I’d find it interesting just to augment the camera with infrared and ultraviolet: imagine being able to use the camera on your phone or glasses to see what’s happening at night, e.g. wildlife (tracking coyotes or raccoons, and managing to avoid skunks!).  Night vision, and seeing things that fluoresce under UV would both be really cool additions.

I’d also be interested in having them enlarge things as well, bringing small details to light like a magnifying glass or microscope.

It made me think about all the senses we could augment. I was thinking about walking our dogs, and how their olfactory life is much richer than ours. They are clearly sensing things beyond our olfactory capabilities, and it would be interesting to have microscent detectors that could follow faint traces to track animals (or to know which owner is not adequately controlling a dog, ahem). They could potentially serve as smoke or carbon monoxide detectors too.

Similarly, auditory enhancement: could we hear things fainter than our ears detect, or have devices serve as a stethoscope? Could we detect far-off cries for help that our ears can’t? Of course, that could be misused, too, to eavesdrop on conversations. Interesting ethical issues come in.

And we’ve already heard about the potential to measure one’s movement, blood pressure, pulse, temperature, and maybe even blood sugar, to track one’s health.  The fit bands are getting smarter and more capable.

There is also the possibility of tracking things we personally can’t sense directly: quantitative measurement of ambient temperature and air pressure is already possible and present in some devices. The thermometer could be a health and weather guide, and a barometer/altimeter would be valuable for hiking in addition to weather.

The combination of reporting these could be valuable too. Sensor nets, where the data from many micro sensors are aggregated, have interesting possibilities: either with known combinations, such as aggregating temperature and air pressure to help with weather, or via machine learning, where, for example, we might include sensitive motion detectors and learn to predict earthquakes, as animals supposedly can. Sounds could be used to triangulate cries for help, and material detectors could help locate sources of pollution.
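For the ‘known combinations’ case, here’s a rough sketch (made-up readings and threshold) of a sensor net averaging barometric data from scattered devices and flagging a rapid pressure drop, a classic storm signal:

```python
# Rough sketch of a sensor net: average barometric readings from many
# devices in one area, then flag a rapid hour-over-hour pressure drop.
# Data and threshold are made up for illustration.
hourly_pressure_hpa = {  # hour of day -> readings from scattered devices
    9:  [1015.2, 1014.8, 1015.0],
    10: [1011.9, 1012.3, 1012.1],
}

def mean(values):
    return sum(values) / len(values)

drop = mean(hourly_pressure_hpa[9]) - mean(hourly_pressure_hpa[10])
if drop > 2.0:  # a fall of a few hPa per hour suggests a storm front
    print(f"Pressure falling fast ({drop:.1f} hPa/hr): storm likely")
```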

We’ve done amazing things with technology, and sensors are both shrinking and getting more powerful. Imagine having sensors scattered about your body in various wearables, integrating that data in known ways, and agreeing to anonymous aggregation for data mining. Yes, there are concerns, but benefits too.

We can put these together in interesting ways: notifications of things we should pay attention to, or simply the chance to observe things our natural senses can’t detect. We can open up the world in powerful ways to support being more informed and more productive. It’s up to us to harness it in worthwhile ways.
