Learnlets

Clark Quinn’s Learnings about Learning

Sharing pointedly or broadly

16 October 2014 by Clark 3 Comments

In a (rare) fit of tidying, I was moving from one note-taking app to another and found a diagram I'd jotted, and it rekindled my thinking. The point was characterizing social media in terms of their particular mechanisms of distribution. I can't fully recall what prompted the attempt at characterization, but one result of revisiting it was thinking about the media in terms of whether they're part of a natural mechanism of 'show your work' (à la Bozarth)/'work out loud' (à la Jarche).

(Diagram: whether person to person or one to many)

The question revolves around whether the media are point or broadcast, that is, whether you specify particular recipients (even in a mailing or group list), or whether it's 'out there' for anyone to access.  Now, there are distinctions, so you can have restricted access on the 'broadcast' mode, but in principle there are two different mechanisms at work.

It should be noted that in the 'broadcast' model, not everyone may be aware that there's a new message if they're not 'following' the poster, but it should be findable by search if not directly.  Also, the broadcast may reach only an organizational network, or it can reach the entire internet.  Regardless, there are differences between the two mechanisms.

So, for example, a chat tool typically lets you ping a particular person, or a set list. On the other hand, a microblog lets anyone decide to ‘follow’ your quick posts.   Not everyone will necessarily be paying attention to the ‘broadcast’, but they could.  Typically, microblogs (and chat) are for short messages, such as requests for help or pointers to something interesting.  The limitations mean that more lengthy  discussions typically are conveyed via…

Formats supporting unlimited text, including thoughtful reflections, updates on thinking, and more, tend to be conveyed via email or blog posts. Again, email is addressed to a specific list of people, directly or via a mail list, openly or perhaps with some folks receiving copies 'blind' (that is, not everyone knows who is receiving the message).  A blog post (like this), on the other hand, is open to anyone on the 'system'.

The same holds true for media files other than text.  Video and audio can be hidden in a particular place (e.g. a course) or sent directly to one person. On the other hand, such a file can be hosted on a portal (YouTube, iTunes) where anyone can see it.  The dialog around a file provides a rich augmentation, just as it can around a blog post or the edited RTs of a microblog comment.

Finally, a slightly different twist is shown with documents.  Edited documents (e.g. papers, presentations, spreadsheets) can be created and sent, but there’s little opportunity for cooperative development.  Creating these in a richer way that allows for others to contribute requires a collaborative document (once known as a wiki).  One of my dreams is that we may have collaboratively developed interactives as well, though that still seems some way off.

The point for showing your work out loud is that a point mechanism only gets you specific feedback, whereas a broadcast mechanism is really about the opportunity to get broader awareness and, potentially, feedback.  This leads to a broader shared understanding and continual improvement, two goals critical to organizational improvement.

Let me be the first to say that this isn't necessarily an important, or even new, distinction; it's just me practicing what I preach.  Also, I recognize that collaborative documents are fundamentally different, and I need a more differentiated way to look at them (pointers or ideas, anyone?), but here's my interim thinking.  What say you?

#itashare

Types of meaningful processing

14 October 2014 by Clark 1 Comment

In a previous post, I argued for different types and ratios of worthwhile learning activities. I've been thinking about this (and working on it) quite a bit lately. I know there are other resources that I should know about (pointers welcome), but I'm currently wrestling with several types of situations and wanted to share my thinking. This is aside from scenarios/simulations (e.g. games), which are the first, best learning practice you can engage in, of course. What I'm looking for is ways to get learners to do processing in ways that will assist their ability to do.  This isn't recitation, but application.

So one situation is where the learner has to execute the right procedure. This seems easy, but the problem is that learners are liable to get it right in practice yet still get it wrong in real situations. An idea I had heard of before, but that was reiterated through Socratic Arts (Roger Schank & cohorts), was to have learners observe (e.g. via video) someone performing the procedure and identify whether it was done correctly or not. This is a more challenging task than just doing it right for many routine but important tasks (e.g. sanitation). It has learners monitor the process, and then they can turn that on themselves to become self-monitoring.  If the selection of mistakes is broad enough, they'll have experience that will transfer to their whole performance.

Another task that I faced earlier was the situation where people had to interpret guidelines to make a decision. Typically, the extreme cases  are obvious, and instructors argue that they all are, but in reality there are many ambiguous situations.  Here, as I’ve argued before, the thing to do is have folks work in groups and be presented with increasingly ambiguous situations. What emerges from the discussion is usually a rich unpacking of the elements.  This processing of the rules in context exposes the underlying issues in important ways.

Another type of task is helping people understand how to apply models to make decisions. Rather than present them with the models, I'm again looking for more meaningful processing.  Eventually I'll expect learners to make decisions with them, but as a scaffolding step, I'm asking them to interpret the models in terms of their recommendations for use.  So before I have them engage in scenarios, I'll ask them to use the models to create, say, a guide to how to use that information: to diagnose, to remedy, to put initial protections in place.  At other times, I'll have them derive subsequent processes from the theoretical model.

One other example I recall came from a paper that Tom Reeves wrote (and that I can't find) where he had learners pick from a number of options that indicated problems or actions to take. The interesting difference was that there was then a followup question about why. Every choice had two stages: decision and then rationale. This is a very clever way to see whether learners aren't just getting the right answer but actually understand why it's right.  I wonder if any of the authoring tools on the market right now include such a template!
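
To make that concrete, here's a rough sketch of how such a two-stage item might be represented in an authoring or delivery tool. This is purely illustrative: the names (TwoStageItem, Choice) and structure are my own invention, not Reeves' design or any existing tool's template.

    from dataclasses import dataclass

    @dataclass
    class Choice:
        text: str
        correct: bool
        feedback: str  # explain *why*, tied to the underlying model

    @dataclass
    class TwoStageItem:
        scenario: str            # the context the learner is deciding within
        decision: list[Choice]   # stage 1: which problem/action do you pick?
        rationale: list[Choice]  # stage 2: why is that the right call?

        def score(self, decision_idx: int, rationale_idx: int) -> tuple[bool, bool]:
            """Return (decision correct?, rationale correct?) separately, so a
            right answer for the wrong reason is visible in the data."""
            return (self.decision[decision_idx].correct,
                    self.rationale[rationale_idx].correct)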

I know there are  more categories of learning and associated tasks that require useful processing (towards do, not  know, mind you ;), but here are a couple that are ‘top of mind’ right now. Thoughts?

The resurgence of games?

8 October 2014 by Clark Leave a Comment

I talked yesterday about how some concepts may not resonate immediately, and need to continue to be raised until the context is right.  There I was talking about explorability and my own experience with service science, but it occurred to me that the same may be true of games.

Now, I’ve been pushing games as a vehicle for learning for a long time, well before my book came out on the topic.  I strongly believe that next to mentored live practice (which doesn’t scale well), (serious) games are the next best learning opportunity.  The reasons are strong:

  • safe practice: learners can make mistakes without real consequences (tho’ world-based ones can play out)
  • contextualized practice (and feedback): learning works better in context rather than on abstract problems
  • sufficient practice: a game engine can give essentially infinite replay
  • adaptive practice: the game can get more difficult to develop the learner to the necessary level
  • meaningful practice: we can choose the world and story to be relevant and interesting to learners

the list goes on.  Pretty much all the principles of the Serious eLearning Manifesto are addressed in games.

Now, I and others (Gee, Aldrich, Shaffer, again the list goes on) have touted this for years.  Yet we haven’t seen as much progress as we could and should.  It seemed like there was a resurgence around 2009-2010, but then it seemed to go quiet again. And now, with Karl Kapp’s Gamification book and the rise of interest in gamification, we have yet another wave of interest.

Now, I'm not a fan of extrinsic gamification, but it appears there's a growing awareness of the difference between extrinsic and intrinsic. And I'm seeing more use of games to develop understanding, at least in K12 circles.  Hopefully, the awareness will arise in higher ed and corporate settings too.

Some fear it's too costly, but my response is twofold:

  • games aren't as expensive as you fear; there are lots of opportunities for games in lower price ranges (e.g. $100K); don't buy into the $1M-and-up mentality
  • they’re actually likely to be effective (as part of a complete learning experience), compared to many if not most of the things being done in learning

So I hope we might finally go beyond Clicky Clicky Bling Bling (tarted-up quiz shows, cheesy videos, and more) and get to interaction that actually leads to change.  Here's hoping!

Service Thinking and the Revolution?

7 October 2014 by Clark Leave a Comment

A colleague I greatly respect, who has a track record of high impact in important positions, has been a proponent of service science.  And I confess that it hadn’t really penetrated.  Yet last week I heard about it in a way that resonated much more strongly and got me thinking, so let me share where it’s leading my thinking, and see what you say.

One time, when I was doing a summer internship at NASA as a grad student, I heard something exciting: a concept called interface 'explorability'.  When I brought it back to the lab, it didn't really resonate with my advisor.  Then, some time later (a year or two), he was discussing a concept and I mentioned that it sounded a lot like that 'explorability', and he suddenly wanted to know more. The point being that there is a time when you're ready to hear a message. And that's me with service science.

The concept is considering a mutual value generation process between provider and customer, and engineering it across the necessary system components and modular integrations to yield a successful solution.  As organizations need to be more customer-centric, this perspective yields processes to do that in a very manageable, measurable way.  And that’s the perspective I’d been missing when I’d previously heard about it, but Hastings  & Saperstein presented it last  week at the Future of Talent event in the form of Service Thinking, which brought the concept home.

I wondered how it compared to Design Thinking, another concept sweeping instructional design and related fields, and it appears to be synergistic but perhaps a superset. While nothing precludes Design Thinking from producing the type of outcome Service Thinking is advocating, I’m inferring that Service Thinking is a bit more systematic and higher level.

The interesting idea for me was to think of bringing Service Thinking to the role of L&D in the organization. If we're looking systematically at how we can bring value to the customer, in this case the organization, we have a chance to look at the bigger picture: the Performance & Development view instead of the training view.  If we take the perspective of an integrated approach to meeting organizational execution and innovation needs, we may naturally develop the performance ecosystem.

We need to take a more comprehensive approach, integrating technology capabilities, resources, and people into a coherent whole. I'm looking at service thinking, perhaps an integration of the rigor of systems thinking with the creative customer focus of design thinking, as at least another way to get us there.  Thoughts?

Constructive vs instructive

1 October 2014 by Clark Leave a Comment

A commenter on last week's post asked an implicit question that caused me to think. The issue was whether the solutions I was proposing had the learners be self-directed or whether it was 'push' learning.  And I reckon there's a bit of both, but I'm fighting for more of a constructivist approach than the instructivist model.

I've argued in the past for more active learning, and I think the argument for pure instructivism sets up a straw man (Feuerzeig argued for guided discovery back in '85!).  Obviously, I think that pure exploration is doomed to failure, as we know that learners can stay in one small corner of a search space without support (hence the coaching in Quest).  However, a completely guided experience doesn't 'stick' as well, either.

Another factor is our target learners.  In my experience, more constructivist approaches can be disturbing to learners who have had more instructivist approaches.  And the learners we are dealing with haven’t been that successful in school, and typically  need a lot of scaffolding.

Yet our goals are fairly pragmatic overall (and in general we should be looking for ways to be pragmatic in more of our learning). We're focused on meaningful skills, so we should leverage this.

In this case, I’m moving the design to more and more “here’s a goal, here’re some resources” type of approach where the goal is to generate a work-related integration (requiring relevant cognitive processing).  Even if it’s conceptual material, I want learners to be doing this, and of course the main focus is on real contextualized practice.

I'm pushing a very activity-based pedagogy (and curriculum). Yes, the tasks are designed, but learners are expected to take some responsibility for processing the information to produce outputs. The longer-term goal is to increase the challenge and variety as we go through the curriculum, developing learners' ability to learn to learn and their ability to adapt as well. Make sense?

Types and proportions of learning activities?

30 September 2014 by Clark Leave a Comment

I've been on quite the roll of late, calling out some bad practices and calling for learning science. And it occurs to me that there could be some pushback.  So let me be clear: I strongly suggest that the types of learning that are needed are not, by and large, info dump and knowledge test.  What does that mean? Let's break it down.

First, let me suggest that what's going to make a difference to organizations is not better fact-remembering. There are times when fact-remembering is needed, such as medical vocabulary (my go-to example). When that needs to happen, tarted-up drill-and-kill (e.g. quiz show templates, etc.) is the way to do it.  Getting people to remember rote facts or arbitrary things (like part names) is very difficult, and largely unnecessary if people can look them up, i.e. the information is in the world (or can be).  There are some things that need to be known cold, e.g. emergency procedures, hence the tremendous emphasis on drills in aviation and the military. Other than that, put it in the world, not the head.  Lookup tables, info sheets, etc. are the solution.  And I'll argue that the need for this is less than 5-10% of the time.

So what is useful?  I'll argue that what is useful is making better decisions.  That is, the ability to explain what's happened and react, or predict what will happen and make the right choice as a consequence.  This comes from model-based reasoning.  What sort of learning helps model-based reasoning? Two types, in a simple framework: you need learners to process the models to help them be comprehended, and to use them in context to make decisions, with the consequences providing feedback.  Yes, there likely will be some content presentation, but it's not everything, and instead it's the core model with examples of how it plays out in context. That is, annotated diagrams or narrated animations for the models; comic books, cartoons, or videos for the examples.  Media, not bullet points.

The processing that helps make models stick includes having learners generate products: giving them data or outcomes and having them develop explanatory models. They can produce summary charts and tables that serve as decision aids. They can create syntheses and recommendations.  This really leads to internalization and ownership, but it may be more time-consuming than worthwhile. The other approach is to have learners make predictions using the models, explaining things.  Worst case, they can answer questions about what the model implies in particular contexts.  So this is a knowledge question, but not "is this an X or a Y?"; rather, "you have to achieve Z: would you use approach X or approach Y?"

Most importantly, you need people to use the models to make decisions like they’ll be making in the workplace.  That means scenarios and simulations.  Yes, a mini-scenario of one question is essentially a multiple choice (though better written with a context and a decision), but really things tend to be bundled up, and you at least need branching scenarios. A series of these might be enough if the task isn’t too complex, but if it’s somewhat complex, it might be worth creating a model-based simulation and giving the learners lots of goals with it (read: serious game).

And, don't forget, if it matters (and why are you bothering if it doesn't?), you need them to practice until they can't get it wrong.  And you need to be facilitating reflection.  The alternatives to the right answer should reflect ways learners often go wrong, and address them individually. "No, that's not correct, try again" is a really rude way to respond to learner actions.  Connect their actions to the model!
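
To illustrate both of the last two points (branching to consequences rather than dead-ending, and feedback tied to specific misconceptions), here's a minimal sketch of a branching-scenario node. The structure and the example content are hypothetical, not drawn from any particular authoring tool or project.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Option:
        text: str
        feedback: str                        # connect the action back to the model
        misconception: Optional[str] = None  # which common error this choice reflects
        next_node: Optional[str] = None      # where this choice leads in the branching

    @dataclass
    class ScenarioNode:
        node_id: str
        situation: str                       # the context and decision the learner faces
        options: list[Option]

    # One decision point: wrong choices branch to consequences and name the
    # misconception, rather than replying "no, that's not correct, try again".
    node = ScenarioNode(
        node_id="outage-1",
        situation="A key customer reports intermittent outages. What do you do first?",
        options=[
            Option("Escalate to engineering immediately",
                   feedback="Escalating before gathering data skips the 'explain "
                            "what's happened' step the model calls for.",
                   misconception="action before diagnosis", next_node="outage-2a"),
            Option("Pull the usage logs and look for a pattern",
                   feedback="Right: the model says explain first, then react.",
                   next_node="outage-2b"),
        ],
    )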

What this also implies is that learning is much more practice than content presentation.  Presenting content and drilling knowledge (particularly in about an 80/20 ratio) is essentially a waste of time.  Meaningful practice should be more than half the time.  And you should consider putting the practice up front and driving learners to the content, as opposed to presenting the content first.  Make the task make the content meaningful.

Yes, I'm making these numbers up, but they're a framework for thinking. You should be having lots of meaningful practice.  There's essentially no role for bullet points or prose and simplistic quizzes, very little role for tarted-up quizzes, and lots of role for media on the content side and branching scenarios and model-driven interactions on the interaction side.  This is kind of an inverse of the tools and outputs I see.  Hence my continuing campaign for better learning.  Make sense?

Better Learning in the Real World

24 September 2014 by Clark 3 Comments

I tout the value of learning science and good design.  And yet, I also recognize that to do it to the full extent is beyond most people’s abilities.  In my own work, I’m not resourced to do it the way I would and should do it. So how  can we strike a balance?  I believe that we need to use  smart heuristics instead of the full process.

I have been talking to a few different people recently who basically are resourced to do it the right way.  They talk about getting the right SMEs (e.g. with sufficient depth to develop models), using a cognitive task analysis process to get the objectives, aligning the processing activities to the type of learning objective, developing appropriate materials and rich simulations, and testing the learning and using feedback to refine the product, all before final release.  That's great, and I laud them.  Unfortunately, the cost to get a team capable of doing this, and the time schedule to do it right, doesn't fit the situation I'm usually in (nor that of most of you).  To be fair, if it really matters (e.g. lives depend on it or you're going to sell it), you really do need to do this (as medical, aviation, and military training usually do).

But what if your team isn't composed of PhDs in the learning sciences, your development resources are tied to the usual tools, your budgets are far more stringent, and your schedules are likewise constrained? Do you have to abandon hope?  My claim is no.

(Diagram: law of diminishing returns curve)

I believe that a smart, heuristic approach is plausible.  Using the typical 'law of diminishing returns' curve (and the shape of this curve is open to debate), I suggest that there is a sweet spot of design processes that gives you a high amount of value for a pragmatic investment of time and resources.  Conceptually, I believe you can get good outcomes with some steps that tap into the core of learning science without following it to the letter.  Learning is a probabilistic game, overall, so we're taking a small tradeoff in probability to meet real-world constraints.

What are these steps? Instead of doing a full cognitive task analysis, we'll make our best guess at meaningful activities before getting feedback from the SME.  We'll switch the emphasis from knowledge tests to mini- and branching scenarios for practice tasks, or we'll have learners take information resources and use them to generate work products (charts, tables, analyses) as processing.  We'll try to anticipate the models, and ask for misconceptions & stories to build in.  And we'll align pre-, in-, and post-class activities in a pragmatic way.  Finally, we'll do a learning equivalent of heuristic evaluation: not a full, scientifically valid test, but we'll run it by the SMEs and fix their (legitimate) complaints, then run it with some students and fix the observed flaws.

In short, what we're doing here is approximating the full process, with some smart guesses instead of full validation.  There's no expectation that the outcome will be as good as we'd like, but it's going to be a lot better than throwing quizzes on content. And we can do it with a smart team that aren't learning scientists but are informed, on a longer but still reasonable schedule.

I believe we can create transformative learning under real world constraints.  At least, I’ll claim this approach is far more justifiable than the too oft-seen approach of info dump and knowledge test. What say you?

Design like a pro

23 September 2014 by Clark 2 Comments

In other fields of endeavor, there is a science behind the approaches.  In civil engineering, it's the properties of materials.  In aviation, it's aeronautical engineering.  In medicine, it's medical science.  If you're going to be a professional in your field, you have to know the science.  So, two questions: is there a science of learning, and is it used?  The answers appear to be yes and no.  And yet, if you're going to be a learning designer or engineer, you should know the science and be using it.

There is a science of learning, and it’s increasingly easy to find.  That’s the premise behind the Serious eLearning Manifesto, for instance (read it, sign it, use it!).  You could read Julie Dirksen’s  Design for How People Learn  as a very good interpretation of the science.  The Pittsburgh Science of Learning Center is compiling research to provide guidance about learning if you want a fuller scientific treatment.  Or read Bransford, et al’s summary of the science of  How People Learn,  a very rich overview.  And Hess & Saxberg’s recent  Breakthrough Leadership in the Digital Age: Using Learning Science to Reboot Schooling  is both a call for why and some guidance on how.

Among the things we know are that rote and abstract information isn't retained, that passing a knowledge test doesn't mean the ability to do, that getting it right once doesn't mean it's known; the list goes on.  Yet, somehow, we see elearning tools like 'click to learn more' (er, less), tarted-up quiz show templates to drill knowledge, easy ways to take content and add quizzes to it, and more.  We see elearning that's arbitrary info dump and simplistic knowledge test, which will have a negligible impact on anything meaningful.

We’re focused on speed and cost efficiencies, not on learning outcomes, and that’s not professional.  Look, if you’re going to do design, do it right.    Anything less is really  malpractice!

Learning in 2024 #LRN2024

17 September 2014 by Clark 1 Comment

The eLearning Guild is celebrating its 10th year, and is using the opportunity to reflect on what learning will look like 10 years from now.  While I couldn't participate in the Twitter chat they held, I optimistically weighed in: "learning in 2024 will look like individualized personal mentoring via augmented reality, AI, and the network".  However, I thought I would elaborate in line with a series of followup posts leveraging the #lrn2024 hashtag.  The Twitter chat had a series of questions, so I'll address them here (with a caveat: our learning really hasn't changed, since our wetware hasn't evolved in the past decade and won't in the next; our support of learning is what I'm referring to here):

1. How has learning changed in the last 10 years (from the perspective of the learner)?

I reckon the learner has seen a significant move to more elearning instead of an almost complete dependence on face-to-face events.  And I reckon most learners have begun to use technology in their own ways to get answers, whether via the Google, or social networks like Facebook and LinkedIn.  And I expect they're seeing more media such as videos and animations, and may even be creating their own. I also expect that the elearning they're seeing is not particularly good, nor improving, if not actually decreasing in quality.  I expect they're seeing more info dump/knowledge test, more and more 'click to learn more', more tarted-up drill-and-kill.  For which we should apologize!

2.  What is the most significant change technology has made to organizational learning in the past decade?

I reckon there are two significant changes that have happened. One is rather subtle as yet, but will be profound, and that is the ability to track more activity, mine more data, and gain more insights. The Experience API coupled with analytics is a huge opportunity.  The other is the rise of social networks.  The ability to stay more tightly coupled with colleagues, sharing information and collaborating, has really become mainstream in our lives, and is going to have a big impact on our organizations.  Working 'out loud', showing our work, and working together is a critical inflection point in bringing learning back into the workflow in a natural way and away from the 'event' model.
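
For readers who haven't seen it, the Experience API's basic unit is an "actor, verb, object" statement. Here's a minimal example rendered as a Python dict; the statement shape and the ADL 'completed' verb come from the spec, while the learner, activity, and values are invented for illustration.

    # Minimal xAPI statement: who did what to which activity, with optional result.
    statement = {
        "actor": {"mbox": "mailto:learner@example.com", "name": "Pat Learner"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"id": "http://example.com/activities/branching-scenario-1",
                   "definition": {"name": {"en-US": "Branching scenario 1"}}},
        "result": {"success": True, "duration": "PT4M30S"},  # optional outcome data
    }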

3.  What are the most significant challenges facing organizational learning today?

The most significant challenge is the status quo: the belief that an information-oriented event model has any relationship to meaningful outcomes.  This plays out in so many ways: order-taking for courses, equating information with skills, being concerned with speed and quantity instead of quality of outcomes, not measuring the impact; the list goes on.  We've become self-deluded that having an LMS and a rapid elearning tool means you're doing something worthwhile, when it's profoundly wrong.  L&D needs a revolution.

4.  What technologies will have the greatest impact on learning in the next decade? Why?

The short answer is mobile.  Mobile is the catalyst for change. So many other technologies go through the hype cycle: initial over-excitement, crash, and then a gradual resurgence (cf. virtual worlds), but mobile has been resistant for the simple reason that there's so much value proposition.  The cognitive augmentation that digital technology provides, available whenever and wherever you are, clearly has benefits, and it's not courses!  It will naturally incorporate augmented reality with the variety of new devices we're seeing, and be contextualized as well.  We're seeing a richer picture of how technology can support us in being effective, and L&D can facilitate these other activities as a way to move to a more strategic and valuable role in the organization.  As above, also new tracking and analysis tools, and social networks.  I'll add that simulations/serious games are an opportunity that is yet to really be capitalized on.  (There are reasons I wrote those books :)

5.  What new skills will professionals need to develop to support learning in the future?

As I wrote (PDF), the new skills that are necessary fall into two major categories: performance consulting and interaction facilitation.  We need to not design courses until we've ascertained that no other approach will work, so we need to get down to the real problems. We should hope that the answer comes from the network when it can, design performance support solutions when it can't, and reserve courses for only when it absolutely has to be in the head. To get good outcomes from the network it takes facilitation, and I think facilitation is a good model for promoting innovation, supporting coaching and mentoring, and helping individuals develop self-learning skills.  So the ability to get at the root causes of problems, choose between solutions, and measure the impact are key for the first part, and understanding what skills are needed by individuals (whether performers or mentors/coaches/leaders) and how to develop them are the key new additions.

6.  What will learning look like in the year 2024?

Ideally, it would look like an ‘always on’ mentoring solution, so the experience is that of someone always with you to watch your performance and provide just the right guidance to help you perform in the moment and develop you over time. Learning will be layered on to your activities, and only occasionally will require some special events but mostly will be wrapped around your life in a supportive way.  Some of this will be system-delivered, and some will come from the network, but it should feel like you’re being cared for  in the most efficacious way.

In closing, I note that, unfortunately, my Revolution book and the Manifesto were both driven by a sense of frustration around the lack of meaningful change in L&D. Hopefully, they're riding or catalyzing the needed change, but in a cynical mood I might believe that things won't change nearly as much as I'd hope. I also remember a talk (cleverly titled: Predict Anything but the Future :) that said the future tends to come out much as an informed view would predict, but with an unexpected twist, so it'll be interesting to discover what that twist will be.

On the Road Fall 2014

16 September 2014 by Clark Leave a Comment

Fall always seems to be a busy time, and I reckon it’s worthwhile to let you know where I’ll be in case you might be there too! Coming up are a couple of different  events that you might be interested in:

September 28-30 I’ll be at the Future of Talent retreat   at the Marconi Center up the coast from San Francisco. It’s a lovely spot with a limited number of participants who will go deep on what’s coming in the Talent world. I’ll be talking up the Revolution, of course.

October 28-31 I'll be at the eLearning Guild's DevLearn in Las Vegas (always a great event; if you're into elearning you should be there).  I'll be running a Revolution workshop (I believe there are still a few spots), taking part in a mobile panel, and talking about how we are going about addressing the challenges of learning design at the Wadhwani Foundation.

November 12-13 I'll be part of the mLearnNow event in New Orleans (well, that's what I call it; they call it LearnNow mobile blah blah blah ;).  Again, there are some slots still available.  I'm honored to be co-presenting with Sarah Gilbert and Nick Floro (with Justin Brusino pulling strings in the background), and we're working hard to make sure it's a really great deep dive into mlearning.  (And, New Orleans!)

There may be one more opportunity, so if anyone in Sydney wants to talk, consider Nov 21.

Hope to cross paths with you at one or more of these places!
