Learnlets


Clark Quinn’s Learnings about Learning

Designing Microlearning

10 May 2017 by Clark 6 Comments

Yesterday, I clarified what I meant about microlearning. Earlier, I wrote about designing microlearning, but what I was really talking about was the design of spaced learning. So how should you design the type of microlearning I really feel is valuable?

To set the stage, here we’re talking about layering learning on performance in a context. However, it’s more than just performance support. Performance support would be providing a set of steps (in whatever form: a series of static photos, video, etc.) or supporting those steps (checklist, lookup table, etc.). And again, this is a good thing, but microlearning, I contend, is more.

To make it learning, what you really need is to support developing an understanding of the rationale behind the steps, so learners can adapt those steps to different situations. Yes, you can do this in performance support as well, but here we’re talking about models.

What (causal) models give us is a way to explain what has happened, and predict what will happen. When we make these available around performing a task, we unpack the rationale. We want to provide an understanding behind the rote steps, to support adaptation of the process in different situations. We also provide a basis for regenerating missing steps.

Now, we can also be providing examples, e.g. how the model plays out in different contexts. If what the learner is doing now can change under certain circumstances, elaborating how the model guides performing differently in different contexts provides the ability to transfer that understanding.

The design process, then, would be to identify the model guiding the performance (e.g. why we do things in this order), which might be an interplay between structural constraints (we have to remove this screw first because…) and causal ones (this is the chemical that catalyzes the process). We then need to determine how to represent that model.

Once we’ve identified the task, and the associated models, we then need to make these available in the context. And here’s why I’m excited about augmented reality: it’s an obvious way to make the model visible. Quite simply, it can be layered on top of the task itself! Imagine that the workings behind what you’re doing are available if you want. That you can explore more as you wish, or not, and simply accept the magic ;).

The actual task  is the practice, but I’m suggesting providing a model explaining  why it’s done this way is the minimum, and providing examples for a representative sample of other appropriate contexts provides support when it’s a richer performance.  Delivered, to be clear, in the context itself. Still, this is what I think  really constitutes microlearning.  So what say you?

Clarifying Microlearning

9 May 2017 by Clark 5 Comments

I was honored to learn that a respected professor of educational technology liked my definition of micro-learning, such that he presented it at a recent conference. He asked if I still agreed with it, and I looked back at what I’d written more recently. What I found was that I’d suggested some alternate interpretations, so I thought it worthwhile to be absolutely clear about it.

So, the definition he cited was:

Microlearning is a small, but complete, learning experience, layered on top of the task learners are engaged in, designed to help learners learn how to perform the task.

And I agree with this, with a caveat. In the article, I’d said that it could also be a small, complete learning experience, period. My clarification is that those are unlikely; the definition he cited is the most likely form, and likely the most valuable.

So, I’ve subsequently said  (and elaborated on the necessary steps):

What I really think microlearning could and should be is for spaced learning.

Here I’m succumbing to the hype, and trying to put a positive spin on microlearning. Spaced learning is a good thing; it’s just not microlearning. And microlearning really isn’t about helping them perform the task in the moment (which is a good thing too), but instead about leveraging that moment to also extend their understanding.

No, I like the original definition, where we layer learning on top of a task, leveraging the context and requiring the minimal content to take a task and make it a learning opportunity. That, too, is a good thing. At least I think so. What do you think?

To LMS or not to LMS

3 May 2017 by Clark 5 Comments

A colleague recently asked (in general, not me specifically) whether there’s a role for LMS functions. Her query was about the value of having a place to see (recommended) courses, to track your development, etc. And that led me to ponder, and here’s my thinking:

My question is where to draw the line. Should you do social learning in the LMS’s version of that, or in a separate system? If you’re using the LMS for social interaction around courses (a good thing), how do you handle the handoff to the social tool used for teams and communities? It would seem to make sense to use the regular tool in the courses as well, to make it part of the habit.

Similarly, should you host non-course resources in the LMS, or out in a portal (which is employee-focused, not siloed)? Maybe the courses also make more sense in the portal, tracked with xAPI? I think I’d like to track self-directed learning, via access to videos and documents, the same way I track formal learning with courses: I want to be able to correlate both with business outcomes, to test the results of experiments in changes.
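
Just to make that tracking idea concrete, here’s a minimal sketch of sending a single xAPI statement for a self-directed resource access to an LRS. The endpoint, credentials, actor, and activity IDs are hypothetical; only the statement structure and the ADL ‘experienced’ verb are standard:

```python
# Minimal sketch: record access to a non-course resource as an xAPI statement,
# so self-directed learning lands in the same store as formal course completions.
# The LRS endpoint, credentials, and IDs below are hypothetical.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"   # hypothetical LRS
AUTH = ("lrs_user", "lrs_password")                        # hypothetical credentials

statement = {
    "actor": {"name": "Pat Learner", "mbox": "mailto:pat@example.com"},
    # 'experienced' is a standard ADL verb; a formal course would use 'completed'
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://portal.example.com/resources/coaching-demo-video",
        "definition": {"name": {"en-US": "Coaching conversation demo video"}},
    },
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
```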

Again, how should I be handling signups for things? I handle signups for all sorts of things via tools like Eventbrite. Is asking to sign up for a training, with a waiting list, different from other events such as a team party?

Now, for representing your learning, is that an LMS role, or an LRS dashboard, or…?  From a broader perspective, is it talent management or performance management or…?

I’m not saying an LMS doesn’t make sense, but it seems like it’s a minor tool at best, not the central organizing function. I get that it’s not really a learning management system but a course management system, but is that the right metaphor? Do we want a learning tracking system instead, and is that what an LMS is, or could be, for?

When we start making a continuum between formal and informal learning, what’s the right suite of tools? I want to find courses and other things through a federated search of *all* resources. And I want to track many things besides course completions, because those courses should have real, work-related assignments, so they’re tracked as work, not learning. Or both. And I want to track the things we’re developing, and continuing to develop, through coaching and stretch assignments. Is that an LMS, or…?

I have no agenda to put the LMS out of business, as long as it makes sense in modern workplace learning. However, we want to use the right tool for the right job, and create an ecosystem that supports us doing the right thing. I don’t have an obvious answer; I’m just trying on a rethink (yes, thinking out loud ;), and wondering what your thoughts are. So, what is the right way to think about this? Do you see a uniquely valuable aggregation of services that makes sense? (And I may have to dig in deeper, think about the essential components and map them out; then we can determine what the right suites of functions are to fulfill those needs.)

To show or not to show (and when)

2 May 2017 by Clark Leave a Comment

At an event the other evening showing various career technology tools, someone said something that I thought was just wrong. I asked about it afterwards, and then explained why I thought it was wrong. The response was “well, there can be different ways to go about it”. And frankly, there really can’t. Think for yourself about why I might say so, and then let me show you why.

The trigger was a design program talking about their design courses. The representative was saying that once a learner had created a project, it was shown to everybody. Which sounds good, since ‘sharing is caring’, or at least it’s a good example of working out loud. And, in general, this is a good idea. But I think it’s not a good idea in learning.

In brainstorming (e.g. informal learning), we know that sharing before others have had their chance to think can color their output. This limits the exploration of the total possible space of opportunities that would come from a diverse team. Hearing another response will likely limit the space that might get explored. Instead, the goal is to diverge before converging.

And so, too, in learning. I’ve argued for assignment submission systems that only allow you to see the other submissions once you’ve submitted your own. Until you’ve struggled with the challenge yourself, you won’t get the most out of seeing how others have solved it.
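
As a minimal illustration of that gating rule (the names and structure here are hypothetical, not any particular platform’s API), the logic is simply that a learner’s view of peers’ work stays empty until their own submission exists:

```python
# Minimal sketch of gated submission visibility: peers' work stays hidden
# until the learner has submitted their own. Names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Assignment:
    submissions: dict[str, str] = field(default_factory=dict)  # learner_id -> work

    def submit(self, learner_id: str, work: str) -> None:
        self.submissions[learner_id] = work

    def visible_submissions(self, learner_id: str) -> dict[str, str]:
        # Gate: show nothing until this learner has struggled with it themselves.
        if learner_id not in self.submissions:
            return {}
        return {lid: w for lid, w in self.submissions.items() if lid != learner_id}


assignment = Assignment()
assignment.submit("ana", "Ana's design rationale")
print(assignment.visible_submissions("ben"))   # {} -- Ben hasn't submitted yet
assignment.submit("ben", "Ben's design rationale")
print(assignment.visible_submissions("ben"))   # now includes Ana's work
```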

If you immediately share the first submission, it may affect those who aren’t that far along yet.  Some may even end up holding off to see what others do! This undermines the integrity of the assignment. One explanation that was given was to provide guidance to others, but that, to me, is the role of the assignment specification.

There is, however, real value in seeing the other submissions once you’ve completed yours. Seeing other approaches helps broaden the understanding. Better yet is to have discussion  on them, as when  critiquing others (constructively) you internalize the monitoring. This discussion  also  provides the opportunity to experiment with working out loud that eventually develops good working habits.

(I’ve similarly argued, by the way, that ‘rollover’ questions  -where the answer is shown once you move your pointer over the question- don’t lead to any meaningful learning. If you haven’t made the mental effort to  commit to a response, it won’t stick as well.)

So I believe that, if you’re developing people’s ability to do, you have a responsibility to do so in the most advantageous way. That includes making the effort to use the best approach to sharing assignments. I was surprised (and dismayed) to see someone arguing to the contrary! I implore you to sweat the details of the approaches you use, for your learners’, and the learning’s, sake.

Innovation Thoughts

27 April 2017 by Clark Leave a Comment

So I presented on innovation to the local ATD chapter a few weeks ago, and they did an interesting and nice thing: they got the attendees to document their takeaways. And I promised to write a blog post about it, and I’ve finally received the list of thoughts, so here are my reflections.  As an aside, I’ve written separate articles on L&D innovation recently for both CLO magazine and the Litmos blog  so you can check those out, too.

I started by talking about why innovation was needed, and then what it was. They recalled that I pointed out that, by definition, an innovation is not only a new idea, but one that is implemented and leads to better results. I made the point that when you’re innovating, designing, researching, trouble-shooting, etc., you don’t know the answer when you start, so these are learning situations, though informal, not formal. And they heard me note that agility and adaptation are premised on informal learning of this sort, and that the opportunity is for L&D to take up the mantle to meet the increasing need.

There was interest, but some lack of clarity, around meta-learning. I emphasized that learning to learn may be your best investment, but given that you’re devolving responsibility, you shouldn’t assume that individuals automatically possess optimal learning skills. The focus then becomes developing learning-to-learn skills, which of necessity is done in the context of some other topic. And, of course, it requires the right culture.

There were some terms they heard that they weren’t necessarily clear on, so per the request, here are the terms (from them) and my definition:

  • Innovation by Design: here I mean deliberately creating an environment where innovation can flourish. You can’t plan for innovation, it’s ephemeral, but you can certainly create a felicitous environment.
  • Adjacent Possible: this is a term Steven Johnson used in his book Where Good Ideas Come From, and my take is that it means that lateral inspiration (e.g. ideas from nearby: related fields or technologies) is where innovation happens, but it takes exposure to those ideas.
  • Positive Deviance:  the idea (which I heard of from Jane Bozarth) is that the best way to find good ideas is to find people who are excelling and figure out what they’re doing differently.
  • Hierarchy and Equality: I’m not quite sure what they were referring to here (I think more along the lines of Husband’s Wirearchy versus hierarchy), but the point is to reduce the levels and start tapping into the contributions possible from all.
  • Assigned roles and vulnerability: I’m even less certain what’s being referred to here (I can’t be responsible for everything people take away ;), but I could interpret this to mean that it’s hard to be safe to contribute if you’re in a hierarchy and are commenting on someone above  you.  Which again is an issue of safety (which is why I advocate that leaders ‘work out loud’, and it’s a core element of Edmondson’s Teaming; see below).

I used the Learning Organization Dimensions diagram (Garvin, Edmondson & Gino) to illustrate the components of a successful innovation environment, and these were reflected in their comments. A number mentioned psychological safety in particular, as well as the other elements of the learning environment. They also picked up on the importance of leadership.

Some other notes that they picked up on included:

  • best principles instead of best practices
  • change is facilitated when the affected individuals choose to change
  • brainstorming needs individual work before collective work
  • that trust is required to devolve responsibility
  • the importance of coping with ambiguity

One takeaway was provided that I know I didn’t say, because I don’t believe it, but it’s interesting as a comment:

“Belonging trumps diversity, and security trumps grit”

This is an interesting belief, and I think that’s likely the case if it’s  not safe to experiment and make mistakes.

They recalled some of the books I mentioned, so here’s the list:

  • The Invisible Computer  by Don Norman
  • The Design of Everyday Things  by Don Norman
  • My  Revolutionize Learning and Development  (of course ;)
  • XLR8 by John Kotter (with the ‘dual operating system‘ hypothesis)
  • Teaming to Innovate by Amy Edmondson (I reviewed it)
  • Working Out Loud by John Stepper
  • Scaling Up Excellence by Robert I. Sutton and Huggy Rao (blogged)
  • Organize for Complexity by Niels Pflaeging (though they heard this as a concept, not a title)

It was a great evening, and really rewarding to see that many of the messages stuck.  So, what are your thought around innovation?

 

Human Learning is Not About to Change Forever

26 April 2017 by Clark 1 Comment

In my inbox was an announcement about a new white paper with the intriguing title  Human Learning is About to Change Forever.  So naturally I gave up my personal details to download a copy.  There are nine claims in the paper, from the obvious to the ridiculous. So I thought I’d have some fun.

First, let’s get clear: our learning runs on our brain, our wetware. And that’s not changing in any fundamental way in the near future. As a famous article once had it: phenotypic plasticity triumphs over genotypic plasticity (in short, our human advantage has been gained via our ability to adapt individually and learn from each other, not through species evolution). The latter takes a long time!

And as a starting premise, the “about to” bit implies these things are around the corner, so that’s going to be a bit of my critique. But nowhere near  all of it.  So here’s a digest of the  nine claims and my comments:

  1. Enhanced reality tools will transform the learning environment. Well, these tools will certainly augment the learning environment (pun intended :). There’s evidence that VR leads to better learning outcomes, and I have high hopes for AR, too. Though is that a really fundamental transition? We’ve had VR and virtual worlds for over a decade at least. And is VR an evolutionary or a revolutionary change from simulations? Then they go on to talk about performance support. Is that transforming learning? I’m on record saying contextualized learning (e.g. AR) is the real opportunity to do something interesting, and I’ll buy it, but we’re a long way away. I’m all for AR and VR, but saying that it puts learning in the hands of the students is a design issue, not a technology issue.
  2. People will learn collaboratively, no matter where they are.  Um, yes, and…?  They’re already doing this, and we’ve been social learners for as long as we’ve existed. The possibilities in virtual worlds to collaboratively create in 3D I still think is potentially cool, but even as the technology limitations come down, the cognitive limitations remain. I’m big on social learning, but mediating it through technology strikes me as just a natural step, not transformation.
  3. AI will banish intellectual tedium. Everything is  awesome.  Now we’re getting a wee bit hypish. The fact that software can parse text and create questions is pretty impressive. And questions about semantic knowledge aren’t going to transform education. Whether the questions are developed by hand, or by machine, they aren’t likely on their own to lead to new abilities to do. And AI is not yet to the level (nor will it be soon) where it can take content and create compelling activities that will drive learners to apply knowledge and make it meaningful.
  4. We will maximize our mental potential with wearables and neural implants. Ok, now we’re getting confused and a wee bit silly. Wearables are cool, and in cases where they can sense things about you and the world, they can start doing some very interesting AR. But transformative? This still seems like a push. And neural implants? I don’t like surgery, and messing with my nervous system when you still don’t really understand it? No thanks. There’s a lot more to it than managing to adjust firing to control limbs. The issue is again about the semantics: if we’re not getting meaning, it’s not really fundamental. And given that our conscious representations are scattered across our cortex in rich patterns, this just isn’t happening soon (nor do I want that much connection; I don’t trust them not to ‘muck about’).
  5. Learning will be radically personalized.  Don’t you just love the use of superlatives?  This is in the realm of plausible, but as I mentioned before, it’s not worth it until we’re doing it on  top of good design.  Again, putting together wearables (read: context sensing) and personalization will lead to the ability to do transformative AR, but we’ll need a new design approach, more advanced sensors, and a lot more backend architecture and semantic work than we’re yet ready to apply.
  6. Grades and brand-name schools won’t matter for employment. Sure, that MIT degree is worthless! Ok, so there’s some movement this way. That would actually be a nice state of affairs. It’d be good if we started focusing on competencies, and built new brand names around real enablement. I’m not optimistic about the prospects, however. Look at how hard it is to change K12 education (the gap between what’s known and what’s practiced hasn’t significantly diminished in the past decades). Market forces may change it, but the brand names will adapt too, once it becomes an economic necessity.
  7. Supplements will improve our mental performance.  Drink this and you’ll fly! Yeah, or crash.  There are ways I want to play with my brain chemistry, and ways I don’t. As an adult!  I really don’t want us playing with children, risking potential long-term damage, until we have a solid basis.  We’ve had chemicals support performance for a while (see military use), but we’re still in the infancy, and here I’m not sure our experiments with neurochemicals can surpass what evolution has given us, at least not without some pretty solid understanding.  This seems like long-term research, not near-term plausibility.
  8. Gene editing will give us better brains.  It’s  alive!  Yes, Frankenstein’s monster comes to mind here. I do believe it’s possible that we’ll be able to outdo evolution eventually, but I reckon there’s still not everything known about the human genome  or the human brain. This similarly strikes me as a valuable long term research area, but in the short term there are so many interesting gene interactions we don’t yet understand, I’d hate to risk the possible side-effects.
  9. We won’t have to learn: we’ll upload and download knowledge. Yeah, it’ll be great! See my comments above on neural implants: this isn’t yet ready for primetime. More importantly, this is supremely dangerous. Do I trust what you say you’re making available for download? Certainly not the case now with many things, including advertisements. Think about downloading to your computer: not just spam ads, but viruses and malware. No thank you! Not that I think it’s close, but I’m not convinced we can ‘upgrade our operating system’ anyway. Given the way that our knowledge is distributed, the notion of changing it with anything less than practice seems implausible.

Overall, this reads more like a sci-fi fan’s dreams than a realistic assessment of what we should be preparing for. No, human learning isn’t going to change forever. The ways we learn, e.g. the tools we learn with, are changing, and we’re rediscovering how we really learn.

There are better guides available to what’s coming in the near term that we should prepare for. Again, we need to focus on good learning design, and on leveraging technology in ways that align with how our brains work, not trying to meld the two. So, those are my opinions; I welcome yours.

Workplace of the Future video

25 April 2017 by Clark 2 Comments

Someone asked for a video on the  Workplace of the Future  project, so I created one. Thought I’d share it with you, too.  Just a walkthrough with some narration, talking about some of the design decisions.

One learning for me (that I’m sure you knew): a script really helps!  It took multiple tries, for a variety of reasons.  I’m not a practiced video creator, so gentle, please!

Top 10 Tools for @C4LPT 2017

19 April 2017 by Clark Leave a Comment

Jane Hart is running her annual Top 100 Tools for Learning poll (you can vote too), and here’s my contribution for this year. These are my personal learning tools, ordered according to Harold Jarche’s Seek-Sense-Share model, as ways to find answers, to process them, and to share for feedback:

  1. Google Search is my go-to tool when I come across something I haven’t heard of. I typically will choose the Wikipedia link if there is one, but also will typically open several other links and peruse across them to generate a broader perspective.
  2. I use GoodReader on my iPad to read PDFs and mark up journal submissions.  It’s handy for reading when I travel.
  3. Twitter  is one of several ways I keep track of what people are thinking about and looking at. I need to trim my list again, as it’s gotten pretty long, but I keep reminding myself it’s drinking from the firehose, not full consumption!  Of course, I share things there too.
  4. LinkedIn is another tool I use to see what’s happening (and occasionally engage in). I have a group for the Revolution, which largely is me posting things, but I do try to stir up conversations. I also see, and occasionally comment on, postings by others.
  5. Skype lets me stay in touch with my ITA colleagues, hence it’s definitely a learning tool. I also use it occasionally to have conversations with folks.
  6. Slack is another tool I use with some groups  to stay in touch. People share there, which makes it useful.
  7. OmniGraffle is my diagramming tool, and diagramming is a way I play with representing my understandings. I will put down some concepts in shapes, connect them, and tweak until I think I’ve captured what I believe. I also use it to mindmap keynotes.
  8. Word is a tool I use to play with words as another way to explore my thinking. I use outlines heavily and I haven’t found a better way to switch between outlines and prose. This is where things like articles, chapters, and books come from. At least until I find a better tool (haven’t really got my mind around Scrivener’s organization, though I’ve tried).
  9. WordPress is my blogging tool (what I’m using here),  and serves both as a thinking tool (if I write it out, it forces me to process it), but it’s also a share tool (obviously).
  10. Keynote is my presentation tool. It’s where I’ll noodle out ways to share my thinking. My presentations  may get rendered to Powerpoint eventually out of necessity, but it’s my creation and preferred presentation tool.

Those are my tools, now what are yours?  Use the link to let Jane know, her collection and analysis of the tools is always interesting.

What you learn not as important as how you learn!

18 April 2017 by Clark Leave a Comment

I’m going a bit out on a limb here, with a somewhat heretical statement: how you learn is more important than what you learn! (You could say pedagogy supersedes curricula, but that’s just being pedantic. ;) And I’m pushing the boundaries of the concept a bit, but I think it’s worth floating as an idea. It’s meta-learning, of course, learning how to learn! The important point is to focus on what’s being developed. And I mean this at two levels.

This was triggered by seeing two separate announcements of new learning opportunities.  Both are focused on current skills, so both are focusing on advanced curricula, things that are modern. While the pedagogy of one isn’t obvious (though claimed to be very practical), the other clearly touts the ways in which the learning happens. And it’s good.

So the pedagogy is very hands-on. In fact, it’s an activity-based curriculum (in my terms), in that you progress by completing assignments very closely tied to what you’ll do on the job. There are content resources available (e.g. expert videos) and instructor feedback, all set in a story. And this is better than a content-based curriculum, so this pedagogy is really very apt for preparing people to do jobs. In fact, they are currently applying it across three different roles that they have determined are necessary.

But if you listen to the longer version (video) of my activity-based learning curriculum story, you’ll see I carry the pedagogy forward. I talk about handing over responsibility to the learner, gradually, to take responsibility for the activities, content, product, and reflection. This is important for learners to start becoming self-improving learners. The point is to develop their ability to do meta-learning.

To do so, by the way, requires that you make your pedagogy visible: the choices that you made, and why. Learners, to adopt their own pedagogy, need to see a pedagogy. If you narrate your pedagogy, that is, document your alternatives and the rationale for your choices, they can actually understand more about the learning process itself.

And this, to me, is the essence of the claim. If you start a learning process about  something, and then hand off responsibility for the learning, while making clear the choices that led there, learners become self-learners. The courses that are designed in the above two cases  will, of necessity, change. And graduates from those courses might be out of date before long,  unless they’ve learned  how to stay current. Unless they’ve learned meta-learning.  That can be added in, and it may be implicit, but I’ll suggest that learning to learn is a more valuable long-term outcome than the immediate employability.

So that’s my claim: in the long term, the learner (and society) will be better off if the learner can learn to self-improve.  It’s not an immediate claim or benefit, but it can be wrapped around something that  is of immediate benefit.  It’s the ‘secret sauce’ that organizations could be adding in, whether internally or in their offerings. What surprises me is how seldom I see this approach taken, or even discussed.

Artificial Intelligence or Intelligence Augmentation

12 April 2017 by Clark Leave a Comment

In one of my networks, a recent conversation has been on Artificial Intelligence (AI) vs Intelligence Augmentation (IA). I’m a fan of both, but my focus is more on the IA side. It triggered some thoughts that I penned to them and thought I’d share here [notes to clarify inserted with square brackets like this]:

As context, I’m an AI ‘groupie’, and was a grad student at UCSD when Rumelhart and McClelland were coming up with PDP (parallel distributed processing, aka connectionist or neural networks). I personally was a wee bit enamored of genetic algorithms, another form of machine learning (but one it’s a bit easier to extract semantics from, or maybe just simpler for me to understand ;).

Ed Hutchins was talking about distributed cognition at the same time, and that remains a piece of my thinking about augmenting ourselves. We don’t do it all in our heads, so what can be in the world and what has to be in the head? [the IA bit, in the context of Doug Engelbart]

And yes, we were following fuzzy logic too (our school was definitely on the left-coast of AI ;). Symbolic logic was considered passé! Maybe that’s why Zadeh [progenitor of fuzzy logic] wasn’t more prominent here (making formal logic probabilistic may have seemed like patching a bad core premise)? And I managed (by hook and crook, courtesy of Don Norman ;) to attend an elite AI convocation held at an MIT retreat with folks like McCarthy, Dennett, Minsky, Feigenbaum, and other lights of both schools. (I think Newell was there, but I can’t state for certain.) It was groupie heaven!

Similarly, it was the time of the emergence of ‘situated cognition’ too (a contentious debate, with proponents like Greeno and even Bill Clancey, while old-school symbolists like Anderson and Simon argued to the contrary). Which reminds me of Harnad’s symbol grounding problem, a much meatier objection to real AI than Dreyfus’s or the Chinese room concerns, in my opinion.

I do believe we ultimately will achieve machine consciousness, but it’s much further out than we think. We’ll have to understand our own consciousness first, and that’s going to be tough, MRI and other such research notwithstanding. And it may mean simulating our cognitive architecture on a sensor-equipped processor that must learn through experimentation and feedback as we do, e.g. taking a few years just to learn to speak! (“What would it take to build a baby” was a developmental psych assignment I foolishly attempted ;)

In the meantime, I agree with Roger Schank (I think he was at the retreat too) that most of what we’re seeing, e.g. Watson, is just fast search, or pattern-learning. It’s not really intelligent, even if it’s doing it like we do (the pattern learning). It’s useful, but it’s not intelligent.

And, philosophically, I agree with those who have stated that we must own the responsibility to choose what we take on and what we outsource. I’m all for self-driving vehicles, because the alternative is pretty bad (tho’ could we do better in driver training or licensing, like in Germany?). And I do want my doctor augmented by powerful rote operations that surpass our own abilities, and also by checklists and policies and procedures, anything that increases the likelihood of a good diagnosis and prescription. But I want my human doctor in the loop. We still haven’t achieved the integration of separate pattern-matching and exception handling that our own cognitive processor provides.
