Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

6 October 2015

Mobile Time

Clark @ 8:05 am

At the recent DevLearn conference, David Kelly spoke about his experiences with the Apple Watch.  Because I don’t have one yet, I was interested in his reflections.  There were a number of things, but what came through for me (and other reviews I’ve read) is that the time scale is a factor.

Now, first, I don’t have one because, as with technology in general, I don’t typically acquire anything until I know how it’s going to make me more effective.  I may have told this story before, but for instance I wasn’t interested in acquiring an iPad when they were first announced (“I’m not a content consumer”). By the time they were available, however, I’d heard enough about how it would make me more productive (as a content creator) that I got one the first day it was available.

So too with the watch. I don’t get a lot of notifications, so that isn’t a real benefit.  The ability to be navigated subtly around town sounds nice, as does checking on certain things.  Overall, however, I haven’t really found the tipping-point use case.  However, one thing he said triggered a thought.

He was talking about how it had reduced the number of times he accessed his phone, and I’d heard that from others, but here it struck a different chord. It made me realize it’s about time frames. I’m trying to make useful conceptual distinctions between devices to help designers figure out the best match of capability to need. So I came up with what seemed an interesting way to look at it.

Various usage times by category: wearable, pocketable, baggable.

This is similar to the way I’d seen Palm talk about the difference between laptops and mobile: I was thinking about the time you spend using your devices.  The watch (a wearable) is accessed quickly, for small bits of information.  A pocketable (e.g. a phone) is used for a number of seconds up to a few minutes.  And a tablet (a baggable) tends to get accessed for longer uses (a laptop doesn’t count).  Folks may well have all three, but they use them for different things.

Sure, there are variations (you can watch a movie on a phone, for instance, and phone calls can be considerably longer), but by and large I suspect that the time of access you need will be a determining factor (it’s also tied to both battery life and screen size). Another way to look at it would be the amount of information you need to make a decision about what to do, e.g. for cognitive work.
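As a rough sketch, the wearable/pocketable/baggable distinction could be expressed as a simple lookup. The thresholds below are hypothetical — the post deliberately leaves them fuzzy — but they make the idea concrete.

```python
# Illustrative sketch of the device/time-frame distinction above.
# The numeric thresholds are hypothetical, not from the post.

def suggest_device(seconds_needed: float) -> str:
    """Map the expected length of an interaction to a device class."""
    if seconds_needed < 10:       # a quick glance: notifications, navigation cues
        return "wearable"         # e.g. a watch
    elif seconds_needed < 300:    # a number of seconds up to a few minutes
        return "pocketable"       # e.g. a phone
    else:                         # sustained reading, viewing, creating
        return "baggable"         # e.g. a tablet

print(suggest_device(5))     # wearable
print(suggest_device(60))    # pocketable
print(suggest_device(1200))  # baggable
```

The point isn’t the exact numbers; it’s that a designer could start from the time an interaction needs and work back to the device, rather than the other way around.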

Not sure this is useful, but it was a reflection and I do like to share those. I welcome your feedback!

2 October 2015

Natalie Panek #DevLearn Keynote Mindmap

Clark @ 12:11 pm

To close off the DevLearn conference, Natalie Panek (@nmpanek) told of her learning journey to be a space engineer with compelling stories of challenging experiences.  With an authentic and engaging style, she helped inspire us to keep learning.

1 October 2015

Adam Savage #DevLearn Keynote Mindmap

Clark @ 9:26 am

Adam Savage gave a thoughtful, entertaining, and ultimately moving talk about how Art and Science are complementary components of what makes us human. He continued telling stories that kept us laughing while learning, and ended on a fabulous note about being willing to be vulnerable as a person and a parent.  Truly a great keynote.

30 September 2015

Connie Yowell #DevLearn Keynote Mindmap

Clark @ 4:58 pm

Connie Yowell gave a passionate and informative presentation on the driving forces behind digital badges.

David Pogue #DevLearn Keynote Mindmap

Clark @ 10:41 am

David Pogue addressed the DevLearn audience on Learning Disruption. In a very funny and insightful presentation, he ranged from the Internet of Things through disintermediation and wearables, pointing out disruptive trends. He concluded by talking about the new generation and the need to keep trying new things.

Tech travails

Clark @ 10:39 am

Today I attended David Pogue’s #DevLearn keynote.  And, as a DevLearn ‘official blogger’, I was expected to mindmap it (as I regularly do). So I turn on my iPad, and hit a steady series of problems. The perils of living in a high-tech world.

First, when I open my diagramming software, OmniGraffle, it doesn’t work. I find out they’ve stopped supporting this edition! So, $50 later (yes, it’s almost unconscionably dear) and sweating out the download (“will it finish in time?”), I start prepping the mindmap.

Except the way it does things is different. How do I add break points to an arrow?!?  Well, I can’t find a setting, but I finally explore other interface icons and find a way. The defaults are different, but I manage to create a fairly typical mindmap.  Phew.

So I export to Photos and open WordPress. After typing in my usual insipid prose, I go to add the image. It starts, and fails.  I try again, and it fails reliably. I re-export, and try again. Nope. I get the image over to my iPhone to try there, to no avail.

I’ve posted the image to the conference app, but it’s not going to appear here until I get back to my room and my laptop.  Grr. 

Oh well, that’s life in this modern world, eh?



24 September 2015

Looking forward on content

Clark @ 8:04 am

At DevLearn next week, I’ll be talking about content systems in session 109.  The point is that instead of monolithic content, we want to start getting more granular, for more flexible delivery. And while I’ll be talking there about some of the options for how, here I want to make the case for why, in a simplified way.

As an experiment (gotta keep pushing the envelope in a myriad of ways), I’ve created a video, and I want to see if I can embed it.  Fingers crossed.  Your feedback welcome, as always.


23 September 2015

Revolution Roadmap: Assess

Clark @ 8:07 am

Last week, I wrote about a process to follow in moving forward on the L&D Revolution. The first step is Assess, and I’ve been thinking about what that means.   So here, let me lay out some preliminary thoughts.

The first level is the broad categories.  As I’m talking about aligning with how we think, work, and learn, those are the three top areas where I feel we fail to recognize what’s known about cognition, individually and together. As I mentioned yesterday, I’m looking at how we use technology to facilitate productivity in ways specifically focused on helping people learn. But let me be clear: here I’m talking about the big picture of learning – problem-solving, design, research, innovation, etc. – as they all fall under the category of things we don’t know the answer to when we begin.

I started with how we think. Too often we don’t put information in the world when we can, yet we know that not all our thinking is in our heads.  So we can ask:

  • Are you using performance consulting?
  • Are you taking responsibility for resource development?
  • Are you ensuring the information architecture for resources is user-focused?

The next area is working, and here the revelation is that the best outcomes come from people working together.  Creative friction, when done in consonance with how we work together best, is where the best solutions and the best new ideas will come from. So you can look at:

  • Are people communicating?
  • Are people collaborating?
  • Do you have in place a learning culture?

Finally, with learning, as the area most familiar to L&D, we need to look at whether we’re applying what’s known about making learning work.  We should start with Serious eLearning, but we can go farther.  Things to look at include:

  • Are you practicing deeper learning design?
  • Are you designing engagement into learning?
  • Are you developing meta-learning?

In addition to each of these areas, there are cross-category issues.  Things to look at for each include:

  • Do you have infrastructure?
  • What are you measuring?
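
The categories and questions above could drive a simple self-assessment tool. Here’s a minimal sketch: the category names and question wording come from the post, but the yes/no scoring scheme is a hypothetical addition of mine.

```python
# Assessment categories from the post, arranged as a data structure.
# The per-category scoring below is a hypothetical illustration.

ASSESSMENT = {
    "think": [
        "Are you using performance consulting?",
        "Are you taking responsibility for resource development?",
        "Are you ensuring the information architecture for resources is user-focused?",
    ],
    "work": [
        "Are people communicating?",
        "Are people collaborating?",
        "Do you have in place a learning culture?",
    ],
    "learn": [
        "Are you practicing deeper learning design?",
        "Are you designing engagement into learning?",
        "Are you developing meta-learning?",
    ],
    "cross-category": [
        "Do you have infrastructure?",
        "What are you measuring?",
    ],
}

def score(answers: dict) -> dict:
    """Fraction of 'yes' (True) answers per category; unanswered counts as no."""
    return {
        cat: sum(answers.get(q, False) for q in qs) / len(qs)
        for cat, qs in ASSESSMENT.items()
    }
```

A low score in one category then points at where to focus the next step (Learn).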

All of these areas have nuances underneath, but at the top level these strike me as the core categories of questions.  This is working down to a finer grain than I looked at in the book (cf. Figure 8.1), though that was a good start at evaluating where one is.

I’m convinced that the first step for change is to understand where you are (before the next step, Learn, about where you could be).  I’ve yet to see many organizations that are in full swing here, and I have persistently made the case that the status quo isn’t sufficient.  So, are you ready to take the first step to assess where you are?


22 September 2015

Biz tech

Clark @ 8:28 am

One of my arguments for the L&D revolution is the role that L&D could be playing.  I believe that if L&D were truly enabling optimal execution as well as facilitating continual innovation (read: learning), then they’d be as critical to the organization as IT. And that made me think about how this role would differ.

To be sure, IT is critical.  In today’s business, we track our business, do our modeling, run operations, and more with IT.  There is plenty of vertical-specific software, from product design to transaction tracking, and of course more general business software such as document generation, financials, etc.  So how can L&D be as ubiquitous as other software?  Several ways.

First, formal learning software is really enterprise-wide.  Whether it’s simulations/scenarios/serious games, spaced learning delivered via mobile, or user-generated content (note: I’m deliberately avoiding the LMS and courses ;), these things should play a role in preparing the audience to execute optimally, and should be accessed by a large proportion of the audience.  And that’s not including our tools to develop same.

Similarly, our performance support solutions – portals housing job aids and context-sensitive support – should be broadly distributed.  Yes, IT may own the portals, but in most cases they are not to be trusted to do a user- and usage-centered solution.  L&D should be involved in ensuring that the solutions both articulate with and reflect the formal learning, and are organized by user need not business silo.

And of course the social network software – profiles and locators as well as communication and collaboration tools – should be under the purview of L&D. Again, IT may own them or maintain them, but the facilitation of their use, the understanding of the different roles and ensuring they’re being used efficiently, is a role for L&D.

My point here is that there is an enterprise-wide category of software, supporting learning in the big sense (including problem-solving, research, design, innovation), that should be under the oversight of L&D.  And this is the way in which L&D becomes more critical to the enterprise.  That it’s not just about taking people away from work and doing things to them before sending them back, but facilitating productive engagement and interaction throughout the workflow.  At least at the places where they’re stepping outside of the known solutions, and that is increasingly going to be the case.

17 September 2015


Clark @ 8:03 am

Last Friday’s #GuildChat was on Agile Development.  The topic is interesting to me, because like with Design Thinking, it seems like well-known practices with a new branding. So as I did then, I’ll lay out what I see and hope others will enlighten me.

As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka Human Computer Interaction or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating the best practices from HCI to the ed tech community.  What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty familiar with the principles, and a fan. That is: really understand the context, design and test frequently, and work in teams with your customers.

Fast forward a couple of decades, and the Agile Manifesto puts a stake in the ground for software engineering. And we see a focus on releasable code, but again with principles of iteration and testing, team work, and tight customer involvement.  Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto.

That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s SAM or Ger Driesen’s Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design.  And this is a good thing (only decades late ;).  Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.

And we certainly need change.  The existing approaches we too often see involve a designer being given some documents, access to a SME (if lucky), and told to create a course on X.  Sure, there’re tools and templates, but they are focused on making particular interactions easier, not on ensuring better learning design. And the person works alone and does the design and development in one pass. There are likely to be review checkpoints, but there’s little testing.  There are variations on this, including perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it’s largely an independent, one-way flow, and this isn’t good.

The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don’t work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don’t take into account that, as with Heisenberg, what we implement will change the system. Iterative development and testing allow the specs to change after initial experience.  Several issues arise with this, however.

For one, there’s a question about the right size and scope of a deliverable.  Learning experiences, while typically overwritten, do have some stricture that keeps them from yielding intermediately useful results. I was curious about what made sense; to me it seemed you could develop your final practice first as a deliverable, and then fill in with the required earlier practice and content resources. This seemed similar to what was offered up during the chat in response to my question.

The other is scoping and budgeting the process. I often ask, when talking about game design, how to know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer is when you’ve hit your metrics: the ones you should set before you begin, that determine the parameters of a solution (and they can be consciously reconsidered as part of the process).  The typical answer, particularly for those concerned with controlling costs, is something like a heuristic choice of three iterations.  Drawing on some other work in software process, I’d recommend creating estimates, but then reviewing them afterward. In the software case, people got much better at estimates, and that could be a valuable extension.  And it shouldn’t be any more difficult to estimate, certainly with some experience, than existing methods.
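The stop-when-you’ve-hit-your-metrics rule, as opposed to a fixed iteration count, can be sketched as a loop. This is an illustration, not anyone’s actual process; the metric names, thresholds, and the `build`/`evaluate` callables are all hypothetical stand-ins.

```python
# Sketch of "iterate until the metrics are hit", with a hard budget as the
# cost-control backstop the post mentions. All names here are hypothetical.

def iterate_design(build, evaluate, targets, budget=10):
    """Iterate a design until evaluation meets targets or the budget runs out.

    build:    callable producing the next prototype
    evaluate: callable returning {metric: value} for a prototype
    targets:  {metric: minimum acceptable value}, set before you begin
    budget:   hard cap on iterations, reviewed afterward to improve estimates
    """
    for iteration in range(1, budget + 1):
        prototype = build()
        results = evaluate(prototype)
        if all(results.get(m, 0) >= t for m, t in targets.items()):
            return prototype, iteration   # metrics hit: this is when you stop
    return prototype, budget              # budget exhausted before metrics met
```

The key design choice is that `targets` is fixed up front (and only consciously revisited), so "we ran out of money" stops being the de facto quality bar.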

Ok, so I may be a bit jaded about new brandings on what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing.  I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should be glad you did.

