Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: align

Revisiting personal learning

11 September 2018 by Clark 2 Comments

A number of years ago, I took a stab at an innovation process. I was reminded of it while thinking about personal learning, and looked at it again. And it doesn't seem to have aged well. So I thought I'd revisit the model and see what emerged. Here's a mindmap of personal learning, and the associated thinking.

The earlier 5R model was based on Harold Jarche's Seek-Sense-Share model, a deep model that has many rich aspects. I had reservations about the labels, and I think it's sparse at either end. (And I worked too hard to keep it to 'R's; Reify just doesn't work for me. ;)

Personal learning

In this new approach, I have a richer representation at either end. My notion of 'seek' (yes, I'm still using Harold's framework; more at the end) has three different aspects. First is 'info flows'. This is setting up the streams you will monitor. They're filters on the overwhelming overload of info available. They're your antennae for resonating with interesting new bits. You can also search for information, using DuckDuckGo or Google, or going straight to Wikipedia or other appropriate information resources you know. And, of course, you can ask, using your network, or Quora, or any social media platform like LinkedIn or Facebook. And there are different details in each.

To make sense of the information, you can do either or both of representing your understanding and experimenting. Representing is a valuable way to process what you're hearing, to make it concrete. Experimenting is putting it to the test. And you naturally do both; for instance, you read a web page telling you how to do something new, then put it into practice and see if it works. Both require reflection, but getting concrete in trying it out or rendering it is valuable. Again, representing and experimenting break down into further details.

What you learn can (and often should) be shared. At whatever stage you’re at, there’s probably someone who would benefit from what you’ve learned.  You can post it publicly (like this blog), or circulate it to a well-selected set of individuals (and that can range from one other person to a small group or some channel that’s limited).  Or you can merely have it in readiness so that if someone asks, you can point them to your thoughts. Which is different than pointing them to some other resource, which is useful, but not necessarily learning. The point is to have others providing feedback on where you’re at.

I looked at Harold's model more deeply after I did this exercise (a meta-learning on its own; take your own stab and then see what others have done). I realize mine is done on sort of a first-principles basis from a cognitive perspective, while his is richer, being grounded in others' frameworks. Harold's is also more tested, having been used extensively in his well-regarded workshop.

I note that part of the meta-learning here is the ongoing monitoring of your own processes (the starred grey clouds). This is a key part of Harold’s workshop, by the way. Looking at your processes and evaluating them. An early exercise where you evaluate your own network systematically, for instance, struck me as really insightful. I’m grateful he was willing to share his materials with me.

So, this has been my sensing and sharing, so I hope you’ll take the opportunity to provide feedback!  What am I missing?


Are Decisions the Key?

4 September 2018 by Clark 2 Comments

A number of years ago now, Brenda Sugrue posited that Bloom's Taxonomy was wrong, and she proposed a simpler framework. I've never been a fan of Bloom's; folks have trouble applying it systematically (reliably discriminating between the various levels). And, while it pushed for higher levels, it let people off the hook if they decided a lower level would do. Sugrue first proposed a simpler taxonomy, and also an alternative that was just performance. In her later version, she's aligned the former to the Science of Learning Center's KLI (knowledge-learning-instruction) framework. But I want to go back to her 'pure performance' model, and make the case that decisions are key: they are not just necessary but also sufficient.

So her latest model discriminates between concept, process, fact, principle, etc. And, I would agree, there are likely different pedagogies for each. Is that basis enough? Let me suggest a different approach, because I don't see how they differ in one meaningful way. For each, you need to take some action, whether it's to:

  • classify as a fact (is it a this or a that)
  • perform the steps (which action to take now)
  • troubleshoot the process (what experiment now)
  • predict an outcome (what will happen)

Note, however, that for each, there's an associated decision. And that, to me, is core. Now, I'm not claiming that they all require the same approach. For instance, to help people deal with ambiguous decisions, I suggested a collaborative approach to discuss the parameters and unpack the thinking. To teach troubleshooting, I would give some practice making conceptual decisions about the systems that could cause the observed symptoms. In internal combustion engines (read: cars), if it's not running, is it the air/fuel system or the electrical system? How could you narrow that down? In a diesel, you could eliminate the electrical ;).
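To make the decision framing concrete, here's a minimal sketch of troubleshooting practice as a series of conceptual decisions. The symptom and system names are hypothetical, chosen only to mirror the car example above:

```python
# A sketch of troubleshooting practice framed as decisions: which
# system could cause the observed symptom? Names are illustrative only.

CANDIDATE_SYSTEMS = {
    "won't start": {"air/fuel", "electrical"},
    "overheating": {"cooling", "air/fuel"},
}

def narrow_down(symptom, engine_type):
    """Return the systems a learner must decide between for a symptom."""
    systems = set(CANDIDATE_SYSTEMS.get(symptom, set()))
    if engine_type == "diesel":
        # Diesels use compression ignition, so the learner can rule out
        # the electrical (spark) system for a no-start symptom.
        systems.discard("electrical")
    return systems
```

The point isn't the code; it's that each step of practice is a choice among alternatives, which is exactly what you'd put in front of a learner.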

Van Merriënboer, in his Four Component Instructional Design, talks about the knowledge you need and the complex decisions you apply it to. I agree, and so it's not just about decisions. However, even the knowledge needs to be applied to stick. To test that learners have acquired the underpinning knowledge, you can have them exercising the models in decisions.

Ok, so you might want to short-circuit the mapping from decision to practice. I think a good heuristic (ala Cathy Moore’s Action Mapping) is just to have them do what they need to do, and give them the necessary information. However, if you want to create a ‘cheat sheet’ to accelerate performance and success, with learning goals and associated pedagogies, I won’t quibble.

Now, you can’t provide all the situations, so you need to choose the right ones that will help facilitate abstraction and transfer. You may need to also ensure that they know the requisite information, so you may need to determine that, but I think exercising the models in simpler situations helps develop them more than just a presentation.

I’m suggesting that focusing radically on decisions is the best way to work with SMEs, and is the best guide for designing practice (e.g. put learners in situations to make decisions). Everything else revolves around that. Now, are these categories reliable  types of decisions?  Will ponder. Your thoughts?

Transparency isn’t enough

30 August 2018 by Clark Leave a Comment

Of late, there have been a number of articles talking about thinking and mental models (e.g. this one). One of the outcomes is that we have a lot of stories about how the world works. Some of them are accurate. Others, not. And pondering this when I should've been sleeping, I realized that there was a likelihood that our misinterpretations could cause problems. It made me think that maybe transparency isn't enough. What does that mean?

We build models, period. We create explanations about how the world works. And they may not be right. If we aren't given good ones up front, it's likely they won't be. It's also the case that they seem to come from previous models we've seen. (And diagrams. ;)

Now, it’s easy to misattribute an outcome to the wrong model if we don’t have better explanations. And this comes into play when we’re trying to figure out what has happened, or why something happened. This includes decisions made by others that may affect us, or even just lead to outcomes such as product designs, policies, or more.

Where I’m going is this: if we don’t see the thinking that explains how we got there, not just the process followed, we can infer wrongly about  why it happened. And this is important in the ‘show your work’ sense.

I’m a fan of transparency. I like it when politics and other decisions are scrutable; we can see who’s making the decision, what influences they’ve had, what steps they took to get there. That’s not enough, however. Particularly when you disagree or have a problem. Take LinkedIn, for example; when I connect to someone using the app on the qPad, I can then send them a message, but when I do it through the web interface on my computer, it wants to use one of those precious ‘InMail’s.  It’s inconsistent (read: frustrating). Is there a rationale?

So I’m going to suggest that just transparency is necessary, but not sufficient. You can’t just show your work, you need to show your thinking. You need to see the rationale!  Two reasons: you can learn more when you see the associated cogitation, and you can provide better feedback as well.  In short, we want to see  why they believe this is the right solution. Otherwise, we could question their decision because we misattribute the reasoning.

Transparency is great, but if you can’t see the thinking behind it, you can make wrong inferences.  It’s better if you can see the thinking  and the result. Is this transparent enough on both?

Realities: Why AR over VR

29 August 2018 by Clark 3 Comments

In the past, I've alluded to why I like Augmented Reality (AR) over Virtual Reality (VR). And in a conversation this past week, I talked about realities a bit more, and I thought I'd share. Don't get me wrong, I like VR a lot, but I think AR has the bigger potential impact. You may or may not agree, but here's my thinking.

In VR, you create a completely artificial context (maybe mimicking a real one). And you can explore or act on these worlds. And the immersiveness has demonstrably improved outcomes over a non-immersive experience. Put to use for learning, where the affordances are leveraged appropriately, they can support deep practice. That is, you can minimize the transfer distance to the real world, particularly where 3D is natural. For situations where the costs of failure are high (e.g. lives), this is the best practice before mentored live performance. And we can do it at scales that are hard to do on flat screens: navigating molecules or microchips at one end, or large physical plants or astronomical scales at the other. And, of course, they can be completely fantastic as well.

AR, on the other hand, layers additional information on  top of our existing reality. Whether with special glasses, or just through our mobile devices, we can elaborate on top of our visual and auditory world.  The context exists, so it’s a matter of extrapolating on it, rather than creating it whole. On the other hand, recognizing and aligning with existing context is hard.  Yet, being able to make the invisible visible where you already are, and presumably are for a reason that makes it intrinsically motivating, strikes me as a big win.

First, I think that the learning outcomes from VR are great, and I don’t mean to diminish them. However, I wonder how general they are, versus being specific to inherently spatial, and potentially social, learning.  Instead, I think there’s a longer term value proposition for AR. There’s less physical overhead in having your world annotated versus having to enter another one. While I’m not sure which will end up having greater technical overhead, the ability to add information to a setting to make it a learning one strikes me as a more generalizable capability.  And I could be wrong.

Another aspect is of interest to me, too. My colleague was talking about mixed reality, and I honestly wondered what that was. His definition sounded like alternate reality, as in alternate reality games. And that, to me, is also a potentially powerful learning opportunity. You can create a separate, fake-but-appearing-real set of experiences, bound by story and consequences of action, that can facilitate learning. We did it once with a sales training game that intruded into your world with email and voicemail. Other situations with their own stakes could likewise intrude into your world and require decisions and actions. They don't have real consequences, but they do impact the outcomes. And these could be learning experiences too.

At core, to me, it’s about providing either deep practice or information at the ‘teachable moment’. Both are doable and valuable. Maybe it’s my own curiosity that wants to have information on tap, and that’s increasingly possible. Of course, I love a good experience, too. Maybe what’s really driving me is that if we facilitate meta-learning so people are good self-learners, having an annotated world will spark more ubiquitous learning. Regardless, both realities are good, and are either at the cusp or already doable.  So here’s to real learning!

User-experienced stories

15 August 2018 by Clark Leave a Comment

Yesterday I wrote about examples as stories. And I received a comment that prompted some reflection. The comment suggested that scenarios were stories too. And I agree!  They’re not examples, but they  are stories. With a twist.

So, as I’ve said many times, simulations are just a manipulable model of the world. And a motivated, self-capable learner  can learn from them. But motivated and self-capable isn’t always a safe bet. So, instead, we put the simulation in an initial state, and ask the learner to take it to a goal state, and we choose those such that they can’t get there until they learn the relationships we want them to understand. That’s what I call a scenario.  And we can tune those into a game. (Yes, we turn them into games by tuning; making the setting compelling, adjusting the challenge, etc.)

Now, a scenario needs a number of things. It needs a context, a setting. It needs a goal, a situation to be achieved. And, I’ll suggest, it should also have a reason for that goal to make sense. If you see the alignment that says why games  should be hard fun, you’ll see that making it meaningful is one of the elements. And that,  I say, is a story. Or, at least, the beginning of one.

In short, a story has a setting, a goal, and a path to get there. We remove boring details, highlight the tension, etc.  We flesh out a setting that the learner cares about, provide a sense of urgency, and enable the goal achievement.  But it’s not all done.

The reason this isn't a complete story is we don't know the path the protagonist uses to accomplish the goal, or ultimately doesn't. We've provided tools for that to happen, but we, as designers, don't control the protagonist. The learner, really, is the protagonist!

What I’m talking about is that the story, certainly for the learner, is co-created between the world we’ve developed, and their use of the options or choices we provide. Together, a story is written for them by us  and them.  And, their decisions and the feedback are the story  and the learning!  It’s, voilà, a learning  experience.

Learning is powerful. Creating experiences that facilitate learning is creative hard fun for the designer, and valuable hard fun for the learner. Learning is about stories, some told, some co-created, but all valuable.

Designing with science

17 July 2018 by Clark 1 Comment

How should we design? It's all well and good to spout principles, but putting them into practice is another thing. While we'd always like to follow learning science, it doesn't always have all the answers we need. I was thinking about this with a project I'm working on, and it occurred to me that there might be some confusion. So I thought I'd share how I like to think and go about it, and see what you think.

So, first of all, you should go with the science. There are good principles around in a variety of forms.  Some good guidance comes in books such as:

  • eLearning & the Science of Instruction (Clark & Mayer)
  • Design for How People Learn (Dirksen)
  • the Make it Learnable series (Shank)
  • and less directly but no less applicably, Michael Allen’s Guide to eLearning

There’s also ATD’s Science of Learning topic (with some good and some less good stuff).  And the 3 Star Learning site. Both of these, of course, aren’t as comprehensive as a book.   And, of course, you can also go right to the pure journals, like Instructional Science, and Learning Sciences, and the like, if you are fluent in academese.  For that matter, I’ve a video course that is about Deeper Instructional Design, e.g. a design approach with learning science ‘baked in’.

But what I was thinking of is what happens when they don't address the specific concern you are wondering about. The second approach I recommend is theory. In particular, Cognitive Apprenticeship (my favorite model; Collins & Brown), or other theories like Elaboration Theory (Reigeluth), Pebble in a Pond (Merrill), or 4 Component ID (Van Merriënboer). Or, arguably more modern, something from Jonassen on problem-based learning or other more social constructivist approaches. They're based on empirical data, but pulled together, and you can often make inferences between the principles. While the next step is arguably better, in the real world you want a scrutable approach, but one that gets you moving forward the fastest.

Finally, you test. If science and theory can't provide the answer, you can wing it, but it's better to set up an experiment, ideally with your sample population. So, for instance, suppose you don't know whether to cast the learner's role in a simulation game as a consultant to many orgs or as a role in one org with many situations. There're tradeoffs: in the former it's easier to provide multiple contexts for practice, but the latter may be more closely aligned with job performance. You can test it, and see what learners think about the experience. Of course, it may be that in the process of just designing both you have some insight. And that's ok.
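As a rough illustration of that kind of test, here's a minimal sketch comparing two design variants on learner scores. The numbers and the effect-size choice (Cohen's d) are my own assumptions for illustration, not from any real study:

```python
# A sketch of comparing two design variants with a sample population,
# assuming you can collect a score per learner. Data is made up.
from statistics import mean, stdev
from math import sqrt

def compare(scores_a, scores_b):
    """Report means and a rough effect size (Cohen's d) for two variants."""
    ma, mb = mean(scores_a), mean(scores_b)
    na, nb = len(scores_a), len(scores_b)
    # Pooled standard deviation across the two groups.
    pooled = sqrt(((na - 1) * stdev(scores_a) ** 2 +
                   (nb - 1) * stdev(scores_b) ** 2) / (na + nb - 2))
    return {"mean_a": ma, "mean_b": mb, "effect_size": (mb - ma) / pooled}

consultant = [62, 70, 68, 75, 66]   # consultant to many orgs
single_org = [71, 78, 74, 80, 72]   # one org, many situations
report = compare(consultant, single_org)
```

With real learners you'd also collect qualitative reactions; the point is just that a small, honest comparison beats winging it.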

And, if you’re a reflective practitioner (and we should be), you might share your findings.  What did you learn?  Learning science advances to the extent that we continue to explore and test.  Speaking of which, how does this approach match with what you do?

Organizational Psychology?

13 July 2018 by Clark 1 Comment

I read an article calling for organizational psychology and the things these folks do for companies.  And, interestingly, many of the tasks seem like things that I’ve been calling for L&D to do. So now I have to ask what’s the relationship between these two areas?

My background  is psychology, specifically the cognitive kind (ok more cog sci than just psych, but still).  And so I’ve been pushing the idea of doing a cognitive analysis of organizations, and incorporating new understandings of cognition in how we run our companies, and more. The point being that we need to align how our organizations operate with how our brains do.

In a sense, then, I’m arguing for a psychological approach to organizations. This includes best principles across the board: working together, learning alone, etc. Yet, I’m typically talking to and about Learning & Development (even when I argue it needs a revolution).  Am I missing the forest for the trees?

Now, it’s clear that the formal role of organizational psychology is bigger. It’s about hiring, and incentives, and occupational stress and a number of other things that I normally don’t consider.  And, it doesn’t seem to be much about technology, the approaches to innovation seem limited, and some of the things it investigates seem more like outcomes.  Yet it also includes training & workforce development, culture, and more.

I also have to say that its history seems to be in behavioral psychology. It appears (on the surface, mind you) to be a bit mired in thinking linearly, not networked. Of course, I'm probably biased here, and this is true for L&D too! There're probably pockets of modernity as well.

So is L&D a subset? I really don’t know.  I’d like to hear what you have to say on it.  Perhaps my arguments really are (cognitive) organizational psychology.  In another sense, I’m not sure it’s important. It’s not so much where you come from as what you are about, and the methods you use.  Still, this is a question I’d like to hear thoughts on. Is there a definitive answer?

Why L&D Should Lead

10 July 2018 by Clark 1 Comment

So, I’ve seen a bright future for L&D. It’s possible, and desirable.  But is it defensible?  I want to suggest that it is.  L&D  should be the business unit with the best understanding of our brains (except, perhaps, in a neurology company, e.g. medical, or a cognitive company, e.g. AI).  And I’ve argued that’s a key role. So, if we grasp that nettle and lead the change, we could and should be leading the way to a brighter new future for organizational success.

Look, cognitive science is somewhat complex. In fact, the human brain is arguably the most complex thing in the known universe!  However, we have a good understanding of cognition for the purposes of guiding learning and performance in the workplace. Or, as I like to say, understanding how we think, work, and learn.  Moreover, we really can’t (and shouldn’t) be doing our jobs unless we have that knowledge. (I have a workshop that can help. ;)

Now, it’s also becoming a cliche that the organizations that learn fastest will be the ones that thrive (not just survive, or not!). We must learn, individually and together. And knowing how to have people work and play well together, representing, reflecting, collaborating, and more  should be L&D’s role. We should be the ones who know the most and best about how to do those things in consonance with how our cognitive architecture works.

And, to be clear, there are lots of practices in organizations that are contrary to the best learning. Fear, lack of time for reflection, micro-management, old-school brainstorming, the list goes on. Without knowledge, we may firmly be convinced we’re doing it right, and instead undermining the best outcomes!  (One way to tell if it’s safe to share in your org: put in a social network. If no one participates…)  On the flip side, there are lots of practices that science tells us work. Details around formal learning, creating spaces for informal learning, practices for short-term and long-term innovation, etc.

We have an uphill battle gaining the credibility we need, but I say start now, and start small. Instill the practices within L&D, take ownership of the necessary skills and knowledge, make it work, document it, and then use that success as a stepping stone to spread the word.

Then, if we  are doing that facilitation of learning, you should be able to see that we are enabling the most important work in the organization!  We can be the key to org success, going forward. L&D should lead the change. That’s the vision I see, at least.  Does this sound good and make sense to you?

Silly Design

3 July 2018 by Clark Leave a Comment

Time for a brief rant on interface designs. Here we're talking about two different situations, one device and one interface. And, hopefully, we can extract some lessons, because these are just silly design decisions.

First up is our OXO timer. And it's a good timer, and gets lots of use. Tea, rice, lots of things. And, sometimes, a few things at a time. As you can see, there are three timers. And, as far as I know, we've only used at most two at a time. So what's the problem?

Well, there’re different beeps signaling the end of different timers. And that’s a good thing. Mostly.  But there’s one very very silly design decision here. Let me tell you that one has one beep, one has two beeps, and one has three. So, guess which number of beeps goes to which timer?  You can see they’re numbered…

Got your guess? It’d be sensible, of course, if the one beep went with the first timer, and two beeps went with the second. But you  know we’re not going there!  Nope, the first timer has two beeps. The second timer has 3 beeps. And the 3rd timer, of course, has one.

It’s a principle called ‘mapping’ (see Don Norman’s essential reading for anyone who designs for people:  The Design of Everyday Things). In it, you make the mapping logical, so for instance between the number of the timer and the number of beeps.  How could you get this wrong? (Cliche cue: you had  one job…)

On to the second of today's contestants, the iTunes interface. Now, everyone likes to bash iTunes: either it's a bad design for what it's doing, or it's trying to do too many things. I'm not going there today; I'm going off on something else.

I’ve always managed the files on the qPad through iTunes. It used to be straightforward, but they changed it. Of course.  There’re also more ways to do it: AirDrop & iFiles being two. And, frankly, they’re both somewhat confusing.  But that’s not my concern today.  The new way I use is only a slight modification on the old way, which is  why I use it. And it works. But there’s a funny little hiccup…

So, there are two ways to bring up a list of things on your iPad.  For one, you select it from the device picture at the top (to the right of the forward/back arrows), and you see a list of things you can access/adjust: music, movies, etc. As you see to the left.

On the other hand, you select it from a list of devices, and you get the drop-down you see to the right. Note that the lists aren't the same.

Wait, they’re not the same?  No, only one has “File Sharing”!  So, you have to remember which way to access the device before you can choose to add a file.  This is just silly!  Only recently have I started remembering which way works (bad design, BTW, trusting to memory), and before that I had to explore. It’s not much, just an extra click, but it’s unnecessary memory load.

The overhead isn’t much, to be clear, but it’s still irritating. Why, why would you have two different ways to access the device, and not have the same information come up?  It’s just silly!  Moreover, it violates a principle. Here, the principle is consistency (and, arguably, affordances). When you access a device, you expect to be able to manipulate the device. And you don’t expect that two different ways to get to what should be the same place would yield two different suites of information. (And don’t even get me started about the stupid inconsistencies between the mobile and web app versions of LinkedIn!)

At least if you haven’t communicated a clear model about why the one way is different than the other. But it’s  not there.  It’s a seemingly arbitrary list. We operate on models, but there’s no obvious way to discriminate between these two, so the models are random. Choosing the device, either way, is supposed to access the device.  That’s the affordance.  Unless you convey clearly  why these are different.

This holds for learning too. Interface folks argued that Gloria Gery's Electronic Performance Support Systems were really making up for bad design. And so, too, is much training. Don argued in his The Invisible Computer that UI folks should be up front in product design, because they could catch the design decisions that would make it more difficult to use. I want to argue that it's the same with the training folks: they should be up front in product or service design to catch decisions that will confuse the audience and require extra support costs.

Design, learning or product/service, works best when it aligns with how our brains work. If we own that knowledge, we can then lobby to apply it, and help make our organizations more successful. If we can make happier users and lower support costs, we should. And as Kathy Sierra suggests, really it's all about learning.

Microlearning Malarkey

27 June 2018 by Clark 7 Comments

Someone pointed me to a microlearning post, wondering if I agreed with their somewhat skeptical take on the article. And I did agree with the skepticism.  Further, it referenced another site with worse implications. And I think it’s instructive to take these apart.  They are emblematic of the type of thing we see too often, and it’s worth digging in. We need to stop this sort of malarkey. (And I don’t mean microlearning as a whole, that’s another issue; it’s articles like this one that I’m complaining about.)

The article starts out defining microlearning as small bite-sized chunks. Specifically: “learning that has been designed from the bottom up to be consumed in shorter modules.” Well, yes, that’s one of the definitions.  To be clear, that’s the ‘spaced learning’ definition of microlearning. Why not just call it ‘spaced learning’?  

It goes on to say “each chunk lasts no more than five-then minutes.” (I think they mean 10). Why? Because attention. Um, er, no.  I like JD Dillon‘s explanation:  it needs to be as long as it needs to be, and no longer.

That attention explanation? It went right to the 'span of a goldfish'. Sorry, that's debunked (for instance, here ;). That data wasn't from Microsoft; it came from a secondary service that got it from a study on web pages. The finding could be due to faster pages, greater experience, or other explanations, but not a change in our attention (evolution doesn't happen that fast, and attention is too complex for such a simple assessment). In short, the original study has been misinterpreted. So, no, this isn't a good basis for anything having to do with learning. (And I challenge you to find a study determining the actual attention span of a goldfish.)

But wait, there’s more!  There’s an example using the ‘youtube’ explanation of microlearning. OK, but that’s the ‘performance support’ definition of microlearning, not the ‘spaced learning’ one. They’re two different things!  Again, we should be clear about which one we’re talking about, and then be clear about the constraints that make it valid. Here? Not happening.  

The article goes on to cite a bunch of facts from the Journal of Applied Psychology. That’s a legitimate source. But they’re not pulling all the stats from that, they’re citing a secondary site (see above) and it’s full of, er, malarkey.  Let’s see…

That secondary site is pulling together statistics in ways that are thoroughly dubious. It starts by citing the journal for one piece of data, and that's a reasonable effect (17% improvement for chunking). But then it goes awry. For one, it claims playing to learner preferences is a good idea, but the evidence is that learners don't have good insight into their own learning. There's a claim of 50% engagement improvement, but that's a misrepresentation of data showing that 50% of people would like smaller courses. That doesn't mean you'll get 50% improvement. They also make a different claim about appropriate length than the one above – 3-7 minutes – but their argument is unsound too. It sounds quantitative, but it's misleading. They throw in the millennial myth, too, just for good measure.

Back to the original article: it cites a figure not on the secondary site, but listed in the same bullet list: "One minute of video content was found to be equal to about 1.8 million written words". WHAT? That's just ridiculous. 1.8 MILLION?!? Found by whom? Of course, there's no reference. And the mistakes go on. The other two bullet points aren't from that secondary site either, and also don't have cites. The reference, however, could mislead you to believe that the rest of the statistics were also from the journal!

Overall, I’m grateful to the correspondent who pointed me to the article. It’s hype like both of these that mislead our field, undermine our credibility, and waste our resources. And it makes it hard for those trying to sell legitimate services within the boundaries of science.  It’s important to call this sort of manipulation out.  Let’s stop the malarkey, and get smart about what we’re doing and why.  
