Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: align

Coping with Change: A Book Review of Flux by April Rinne

9 September 2021 by Clark 1 Comment

How do we cope with change? There’s a myth that we resist change, but Peter de Jaeger busted that in a talk I heard, where he pointed out that we make changes all the time. We get married, take a different job, have kids, all of which are changes. The difference is that these are changes we choose! However, in this era of increasing change, we’re likely to face more and more changes we didn’t expect. Can we improve our ability to cope with change? Yes, says April Rinne in her book Flux: 8 Superpowers for Thriving in Constant Change.

And here’s a caveat: I was part of a group she put together to talk about Flux while she was writing the book. I’m in the acknowledgements.

April, faced with a heavy unchosen change in her teens, carried that with her. It’s driven her interest in change and how we can learn to cope. Given that we’re in an era of increasing change, she recognized that we would benefit from having some approaches to improve our resilience. She looked at a wide variety of inputs, and has distilled her learnings into 8 mental frameworks that help.

The underlying focus is on a flux mindset, that is, a stance that accepts that change is coming rather than resisting it. The eight different ways of looking at the world are deliberately provocative, but also apt:

  • Run Slower
  • See What’s Invisible
  • Get Lost
  • Start with Trust
  • Know Your ‘Enough’
  • Create Your Portfolio Career
  • Be All the More Human
  • Let Go of the Future

Each gets a chapter, with illustrations of the challenge and practical ways to enact it. You may find, as I did, that some are familiar while others are more challenging. Each draws on ancient wisdom, practical experience, or both. The ones that were new I find all the more interesting. And useful!

That’s the real key. It’s very much aligned with what we know about how our brains work (a big issue with me, as this audience has probably learned ;). Some areas I feel I have a handle on (e.g. run slower), and others are more challenging (e.g. see what’s invisible). There are bound to be areas of work for you. The upside of that work, however, is likely to be a better ability to ‘be’.

This is a book that you’ll want your loved ones to read, because what it provides aligns with a view of the world as it could and should be. It’s a guide for coping with change that addresses not only individuals, but organizations and society as a whole.  Highly recommended.

Iterating and evaluating

7 September 2021 by Clark Leave a Comment

I’ve argued before about the need for evaluation in our work. This occurs summatively, where we’re looking beyond smile sheets to actually determine the impact of our efforts. However, it also should work formatively, where we’re seeing if we’re getting closer. Yet there are some ways in which we go off track. So I want to talk about iterating and evaluating our learning initiatives.

Let’s start by talking about our design processes. The 800 lb gorilla of ADDIE has shifted from a waterfall model to a more iterative approach. Yet it still carries baggage. Of late, more agile and iterative approaches have emerged, not least Michael Allen’s SAM and Megan Torrance’s LLAMA. Agile approaches, where we’re exploring, make more sense when designing for people, with their inherent complexity.

Agile approaches work on the basis of creating, basically, Minimum Viable Products, and then iterating. We evaluate each iteration. That is, we check to see what needs to be improved, and what is good enough. However, when are we done?

In my workshops, when talking about iteration, I like to ask the audience this question. Frequently, the answer is “when we run out of time and money”. That’s an understandable answer, but I maintain it’s the  wrong answer.

If we iterate until we run out of time and money, we don’t know that we’ve actually met our goals. As I explained about social media metrics (and it applies here too), you should be iterating until you achieve the metrics you’ve set. That means you know what you’re trying to do!

Which requires, of course, that you set metrics for what your solution should achieve. These could include usability and engagement (which come before and after, respectively), but most critically ‘impact’. Is this learning initiative solving the problem we designed it to address? Which also means you need to have a discussion of why you’re building it, and how you’ll know it’s working.

Of course, if you’re running out of time and money faster than you’re closing in on your goal, you have to decide whether to relax your standards, apply for more resources, abandon the work, or…but at least you’re doing so consciously. That’s still better than arbitrarily deciding, for example, that three iterations is appropriate.
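That stopping rule can be sketched as a simple loop. This is a hypothetical sketch, not anything from a real project: `evaluate`, `improve`, `target`, and `budget` are all placeholders for whatever metric and revision process your initiative actually uses.

```python
def iterate_until_done(solution, evaluate, improve, target, budget):
    """Iterate on a design until it meets the target metric, or the
    time/money budget runs out, and report which one happened.

    evaluate(solution) -> a score; improve(solution) -> a revised solution.
    Returning met_target makes running out of budget an explicit,
    conscious outcome rather than a silent stopping rule.
    """
    score = evaluate(solution)
    spent = 0
    while score < target and spent < budget:
        solution = improve(solution)
        score = evaluate(solution)
        spent += 1
    return solution, score, score >= target
```

Deciding what `evaluate` measures (impact, not just smiles) is the real design work; the loop merely makes the stopping criterion explicit.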

I do recognize that this isn’t our current situation, and changing it isn’t easy. We’re still asked to make slide decks look good, or create a course on X, etc. Ultimately, however, our professionalism will ask us to do better. Be ready. Eventually, your CFO should care about the return on your expenditures, and it’ll be nice to have a real answer. So, iterating and evaluating  should  be your long term approach. Right?

More lessons from bad design

24 August 2021 by Clark 2 Comments

I probably seem like a crank, given the way I take things apart. Yet, I maintain there’s a reason beyond “get off my lawn!” I point out flaws not to complain, but instead to point to how to do it better. (At least, that’s my story and I’m sticking to it. ;) Here’s another example, providing more lessons from bad design.

In this case, I’ll be attending a conference and the providers have developed an application to support attendees. In general, I look forward to these applications. They provide ways to see who’s attending, and peruse sessions to set your calendar. There are also ways to connect to people. However, two major flaws undermine this particular instance.

The first issue is speed. This application is slow! I timed it: 4 seconds to open the list of speakers or attendees. Similarly, I clicked on a letter to jump through the list of attendees. The time that took varied from 4 to 8 seconds. Jumping to the program took 6 seconds.

While that may seem short, compare that to most response times in apps. You essentially can’t time them, they’re so fast. More than a second is an era in mobile responsiveness. I suspect that this app is written as a ‘wrapped’ website, not a dedicated app. Which works sometimes, but not when the database is too big to be responsive. Or it could just be bad coding. Regardless, this is  basically unusable. So test the responsiveness before it’s distributed to make sure it’s acceptable. (And then reengineer it when it isn’t.)
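That pre-release responsiveness test could be as simple as timing the slowest operations and failing the release if any exceeds a threshold. A minimal sketch, with invented names: `action` stands in for whatever the app does, such as opening the attendee list.

```python
import time

def worst_case_latency(action, runs=5):
    """Run an operation several times and return the slowest time in seconds."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        action()
        worst = max(worst, time.perf_counter() - start)
    return worst

def responsive_enough(action, threshold=1.0):
    """A pre-release gate: more than a second is an era in mobile responsiveness."""
    return worst_case_latency(action) <= threshold
```

The point isn’t the code; it’s that responsiveness is measurable before distribution, so there’s no excuse for shipping a 4-second list view.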

That alone would be sufficient to discount this app, but there’s a second problem. Presumably for revenue reasons, there are ads that scroll across the top. That might make sense to keep the costs of the app down, but it conflicts fundamentally with our visual architecture.

Motion in the periphery of our vision is distracting. That was evolutionarily adaptive, allowing us to detect threats from places that we weren’t focusing on. Yet, when it’s not a threat, and we  are trying to focus on something, it interferes. We learned about this in the days of web pages with animated gifs: you couldn’t process what you were there to consume!

In this app, the scrolling of the ads makes it more difficult to read the schedule, attendee lists, and other information. Thus, the whole purpose of the application is undermined. You could instead have static ads randomly attached to the pages you click on. The audience is likely to visit several pages, so all the ads will get seen. Having them move to ensure that you see them all, however, defeats the very point of the app.

Oddly enough, there are other usability problems here. On the schedule, there’s a quick jump to times on a particular day. Though it stops at 2PM!?!? (The conference extends beyond that; my session’s at 4PM.) You’d think you could swipe to see later times on that ‘jump’ menu, but that doesn’t work. I can’t explore farther, because the usability makes it too painful; there may be more lessons from bad design that I’m missing.

Our cognitive architecture is powerful, but has limitations. Designing to work in alignment with our brains is a clear win; and this holds true for designing for learning as well as performance support. Heck, I’ve written a whole book  about how our minds work, just to support our ability to design better learning! Conflicting with our mental mechanisms is just bad design. My goal is that with more lessons in bad design, we can learn to do better. Here’s to good design!

My ‘Man on the Moon’ Project

20 July 2021 by Clark 8 Comments

There have been a variety of proposals for the next ‘man on the moon’ project since JFK first inspired us. This includes going to Mars, infrastructure revitalization, and more. And I’m sympathetic to them. I’d like us to commit to manufacturing and installing solar panels over all parking lots, both to stimulate jobs and the economy, and transform our energy infrastructure, for instance. However, with my focus on learning and technology, there’s another ‘man on the moon’ project I’d like to see.

I’d like to see an entire K12 curriculum online (in English, but open, so that anyone can translate it). However, there are nuances here. I’m not oblivious to the fact that there are folks pushing in this direction. I don’t know them all, but I certainly have some reservations. So let me document three important criteria that I think are critical to make this work (cue my claim: “only two things wrong with education in this country, the curriculum and the pedagogy; other than that it’s fine”).

First, as presaged, it can’t be the existing curriculum.  Common Core isn’t evil, but it’s still focused on a set of elements that are out of touch. As an example, I’ll channel Roger Schank on the quadratic equation: everyone’s learned (and forgotten) it, almost no one actually uses it. Why? Making every kid learn it is just silly. Our curriculum is a holdover from what was stipulated at the founding of this country. Let’s get a curriculum that’s looking forward, not back. Let’s include the ability to balance a bankbook, to project manage, to critically evaluate claims, to communicate visually, and the like.

Second, as suggested, it can’t be the existing pedagogy. Lecture and test don’t lead to retaining and transferring the ability to  do. Instead, learning science tells us that we need to be given challenging problems, and resources and guidance to solve them. Quite simply, we need to practice as we want to be able to perform. Instruction is designed action and  guided reflection.  Ideally, we’d layer on learning on top of learner interests. Which leads to the third component.

We need to develop teachers who can facilitate learning in this new pedagogy. We can’t assume teachers can do this. There are many dedicated teachers, but the system is aligned against effective outcomes. (Just look at the lack of success of educational reform initiatives.) David Preston, with his Open Source Learning has a wonderful idea, but it takes a different sort of teacher. We also can’t assume learners sitting at computers. So, having a teacher support component along with every element is important.

Are there initiatives that are working on all this? I have yet to see any one that’s gotten it  all right.  The ones I’ve seen lack on one or another element. I’m happy to be wrong!

I also recognize that agreeing on all the elements, each of which is controversial, is problematic. (What’s the  right curricula? Direct instruction or constructivist? How do we value teachers in society?) We’d have major challenges in assembling folks to address any of these, let alone all and achieving convergence.

However, think of the upside. What could we accomplish if we had an effective education system preparing youth for the success of our future? What is the best investment in our future? I realize it’s a big dream, and I’m not in a position to make it happen. Yet I did want to drop the spark, and see if it fires any imaginations. I’m happy to help. So, this is my ‘man on the moon’ project; what am I missing?

Jay Cross Memorial Award 2021: Sahana Chattopadhyay

5 July 2021 by Clark 1 Comment

Jay Cross was a deep thinker and a man of many talents, never resting on his past accomplishments. Following his death in November 2015, the partners of the Internet Time Alliance — Jane Hart, Charles Jennings, Harold Jarche, and myself — resolved to continue Jay’s work. The Internet Time Alliance Award, in memory of Jay Cross, is an annual presentation. We award it to a workplace learning professional who has contributed in positive ways to the field of Informal Learning. The Jay Cross Memorial Award is one way to keep pushing our professional fields and industries to find new and better ways to learn and work.

Recipients champion workplace and social learning practices inside their organization and/or on the wider stage. They share their work in public and often challenge conventional wisdom. We look for professionals who are convincing and effective advocates of a humanistic approach to workplace learning and performance. Recipients also continuously welcome challenges at the cutting edge of their expertise.

We announce the award on 5 July, Jay’s birthday. The Internet Time Alliance Jay Cross Memorial Award recipient for 2021 is Sahana Chattopadhyay.

Sahana is the founder of Proteeti — a Sanskrit word meaning learning that transforms — which describes the spirit of the award. She has written extensively about learning and development and has been active on social media for many years.

I first met Sahana through #lrnchat, and she has maintained steady support of Jay and the Internet Time Alliance’s work. She’s continued to be a voice for making sense of an uncertain world, which overlaps substantially with some of our own work.

At her site, she talks about moving to “a world where many worlds fit” through acceptance of others, interconnection, and living with emergence. She applies these principles to organizations and leaders to facilitate shifting to more effective and humane ways of being.

As a vocal advocate for mindsets that unleash possibility, Sahana embodies the ideals Jay Cross worked towards. We’re honored to be able to recognize her work through the Jay Cross Memorial Award.

Doing Gamification Wrong

22 June 2021 by Clark 8 Comments

As I’ve said before, I’m not a fan of ‘gamification’. Certainly for formal learning, where I think intrinsic motivation is a better area to focus on than extrinsic. (Yes, there are times it makes sense, like tarting up rote memory development, but it’s under-considered and over-used.) Outside of formal learning, it’s clear that it works in certain places. However, we need to be cautious about considering it a panacea. In a recent instance, I think it’s definitely misapplied. So here’s an example of doing gamification wrong.

This came to me via a LinkedIn message where the correspondent pointed me to their recent blog article. (BTW, I don’t usually respond to these, but if I do, you’re going to run the risk that I poke holes. 😈) In the article, they were talking about using gamification to build organizational engagement. Interestingly, even in their own article, they were pointing to other useful directions unknowingly!

The problem, as claimed, is that working remotely can remove engagement. Which is plausible. The suggestion, however, was that gamification is the solution. Which I suggest is a patch on a more fundamental problem. The issue was a daily huddle, and this quote summarizes the problem: “there is zero to little accountability of engagement and participation”. Their solution: add points to these things. Let me suggest that’s wrong.

What facilitates engagement is a sense of purpose and belonging. That is, recognizing that what one does contributes to the unit, and the unit contributes to the organization, and the organization contributes to society. Getting those lined up and clear is a great way to build meaningful engagement. Interestingly, even in the article they quote: “to build true engagement, people often need to feel like they are contributing to something bigger than themselves.” Right! So how does gamification help? That seems to be trying to patch a  lack of purpose. As I’ve argued before, the transformation is not digital first, it’s people first.

They segue off to microlearning, without (of course) defining it. They end up meaning spaced learning (as opposed to performance support). Which, again, isn’t gamification, but they lump it in regardless. Again, wrongly. They do mention a successful instance, where Google got 100% compliance on travel expenses, but that’s very different from company engagement. It’s got to be the right application.

Overall, gamification by extrinsic motivation can work under the right circumstances, but it’s not a solution to all that ails an organization. There are ways and times, but it’s all too easy to be doing gamification wrong. ‘Tis better to fix a broken culture than to patch it. Patching is, at best, a temporary solution. This is certainly an example.

Exploring Exploration

15 June 2021 by Clark Leave a Comment

Learning, I suggest, is action and reflection. (And instruction should be designed action and guided reflection.) What that action typically ends up being is some sort of exploration (aka experimentation). Thus, in my mind, exploration is a critical concept for learning. That makes it worth exploring exploration.

In learning, we must experiment (i.e. act) and observe and reflect on the outcomes. We learn to minimize surprise, but we also act to generate surprise. I stipulate that we do so when the costs of getting it wrong are low. That is, making learning safe. So providing a safe sandbox for exploration supports learning. Similarly, we should keep consequences low for mistakes generated through informal learning.

However, our explorations aren’t necessarily efficient or effective. Empirically, we can make ineffective choices, such as changing more than one variable at a time, or missing an area of exploration completely. For instruction, then, we need support. Many years ago, Wallace Feurzeig argued for guided exploration, as opposed to free search (the straw man used to discount constructivist approaches). So putting constraints on the task and/or the environment can make exploration more effective.

Exploration also drives informal learning. Diversity on a team, properly managed, increases the likelihood of searching a broader space of solutions than otherwise. There are practices that increase the effectiveness of the search. Similarly, exploration should be focused on answering questions. We also want serendipity, but there should be guidelines that keep the consequences under control.

By making exploration safe and appropriately constrained, we can advance our understanding most rapidly, either helping some folks learn what others know, or advance what we all know. Exploration is a key to learning, and we need to understand it. Thus, we should also keep exploring exploration!

New recommended readings

8 June 2021 by Clark Leave a Comment

Of late, I’ve been reading quite a lot, and I’m finding some very interesting books. Not all have immediate take-homes, but I want to introduce a few to you with some notes. Not all will be relevant, but all are interesting and even important. I’ll also update my list of recommended readings. So here are my new recommended readings. (With Amazon Associates links: support your friendly neighborhood consultants.)

First, of course, I have to point out my own Learning Science for Instructional Designers. A self-serving pitch confounded with an overload of self-importance? Let me explain. I am perhaps overly confident that it does what it says, but others have said nice things. I really did design it to be the absolute minimum reading that you need to have a scrutable foundation for your choices. Whether it succeeds is an open question, so check out some of what others are saying. As to self-serving, unless you write an absolute mass best-seller, the money you make off books is trivial. In my experience, you make more money giving it away to potential clients as a better business card than you do on sales. The typical few hundred dollars I get a year for each book aren’t going to solve my financial woes! Instead, it’s just part of my campaign to improve our practices.

So, the first book I want to recommend is Annie Murphy Paul’s The Extended Mind. She writes about new facets of cognition that open up a whole area for our understanding. Written by a journalist, it is compelling reading. Backed by science, it’s valuable as well. In the areas I know and have talked about, e.g. emergent and distributed cognition, she gets it right, which leads me to believe the rest is similarly spot on. (There’s also her previous track record; I mind-mapped her talk on learning myths at a Learning Solutions conference.) Well illustrated with examples and research, she covers embodied cognition, situated cognition, and socially distributed cognition, all important. Moreover, there are solid implications for the redesign of instruction. I’ll be writing a full review later, but here’s an initial recommendation of an important and interesting read.

I’ll also alert you to Tania Luna’s and LeeAnn Renninger’s Surprise. This is an interesting and fun book that, instead of focusing on learning effectiveness, looks at the engagement side. As their subtitle suggests, it’s about how to Embrace the Unpredictable and Engineer the Unexpected. While the first bit of that is useful personally, it’s the latter that provides lots of guidance about how to take our learning from events to experiences. Using solid research on what makes experiences memorable (hint: surprise!) and illustrative anecdotes, they point out systematic steps that can be used to improve outcomes. It’s going to affect my Make It Meaningful work!

Then, without too many direct implications, but intrinsically interesting, is Lisa Feldman Barrett’s How Emotions Are Made. Recommended to me, this book is more for the cog sci groupie, but it does a couple of interesting things. First, it creates a more detailed yet still accessible explanation of the implications of Karl Friston’s Free Energy Theory. Barrett talks about how those predictions are working constantly and at many levels in a way that provides some insights. Second, she then uses that framework to debunk the existing models of emotions. The experiments with people recognizing facial expressions of emotion get explained in a way that makes clear that emotions are not the fundamental elements we think they are. Instead, emotions are social constructs! Which undermines, BTW, all the facial recognition of emotion work.

I also was pointed to Tim Harford’s The Data Detective, and I do think it’s a well done work about how to interpret statistical claims. It didn’t grip me quite as viscerally as the afore-mentioned books, but I think that’s because I (over-)trust my background in data and statistics. It is a really well done read about some simple but useful rules for how to be a more careful reviewer of statistical claims. While focused on parsing the broader picture of societal claims (and social media hype), it is relevant to evaluating learning science as well.

I hope you find my new recommended readings of interest and value. Now, what are you recommending to me? (He says, with great trepidation. ;)

The case for model answers (and a rubric)

3 June 2021 by Clark 4 Comments

As I’ve been developing online workshops, I’ve been thinking more about the type of assessment I want. Previously, I made the case for gated submissions. Now I find another type of interaction I’d like to have. So here’s the case for model answers (and a rubric).

As context, many moons ago we developed a course on speaking to the media. This was based upon the excellent work of the principals of Media Skills, and was a case study in my Engaging Learning book. They had been running a face-to-face course, and rather than write a book, they wondered if something else could be done. I was part of a new media consortium, and was partnered with an experienced CD-ROM developer to create an asynchronous elearning course.

Their workshop culminated in a live interview with a journalist. We couldn’t do that, but we wanted to prepare people to succeed at that as an optional extra next step. Given that this is something people really fear (apocryphally more than death), we needed a good approximation. Along with a steady series of exercises going from recognizing a good media quote to compiling one, we wanted learners to have to respond live. How could we do this?

Fortunately, our tech guy came up with the idea of a programmable answering machine. Through a series of menus, you would drill down to someone asking you a question, and then record an answer. We had two levels: one where you knew the questions in advance, and a final test where you’d have a story and details, but had to respond to unanticipated questions.

This was good practice, but how to provide feedback? Ultimately, we allowed learners to record their answers, then listen to their answers and a model answer. What I’d add now would be a rubric for comparing your answer to the model answer, to support self-evaluation. (And, of course, we’d now do it digitally in the environment, not needing the machine.)

So that’s what I’m looking for again. I don’t need verbal answers, but I do want free-form responses, not multiple-choice. I want learners to be able to self-generate their own thoughts. That’s hard to auto-evaluate. Yes, we could use whatever the modern equivalent of Latent Semantic Analysis is, and train up a system to analyze and respond to their remarks. However, a) I’m doing this on my own, and b) we underestimate, and underuse, the power of learners to self-evaluate.

Thus, I’m positing a two-stage experience. First, there’s a question that learners respond to. Ideally, paragraph size, though their response is likely to be longer than the model one; I tend to write densely (because I am). Then, they see their answer, a model answer, and a self-evaluation rubric.

I’ll suggest that there’s a particular benefit to learners’ self-evaluating. In the process (particularly with specific support in terms of a mnemonic or graphic model), learners can internalize the framework to guide their performance. Further, they can internalize using the framework and monitoring their application to become self-improving learners.
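The two-stage flow could be structured along these lines. This is a minimal sketch under my own assumptions; the class and field names are invented for illustration, not taken from any existing tool.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Exercise:
    question: str
    model_answer: str
    rubric: List[str]                    # criteria for self-evaluation
    learner_answer: Optional[str] = None

    def submit(self, answer: str) -> None:
        # Stage 1: capture the learner's free-form response
        # before anything is revealed
        self.learner_answer = answer

    def review(self):
        # Stage 2: only after submitting do learners see the model
        # answer and the rubric, so self-evaluation follows self-generation
        if self.learner_answer is None:
            raise ValueError("submit an answer before reviewing")
        return self.learner_answer, self.model_answer, self.rubric
```

The design choice that matters is the gate in `review`: the model answer stays hidden until the learner has generated their own response, preserving the retrieval practice.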

This is on top of providing the ability to respond in richer ways than picking an option out of those provided. It requires a freeform response, closer to what will likely be required after the learning experience. That’s similar to what I’m looking for from the gated response, but the latter expects peers and/or instructors to weigh in with feedback, whereas here the learner is responsible for evaluating. That’s a more complex task, but also very worthwhile if carefully scaffolded.

Of course, it’d also be ideal if an instructor is monitoring the responses to look for any patterns, but that’s beyond the learners’ own responses. So that’s the case for model answers. So, what say you? And is that supported anywhere, or in any way you know?

Andragogy vs Pedagogy

13 April 2021 by Clark 24 Comments

Asked about why I used the word pedagogy instead of andragogy, I think it’s worth elaborating (since I already had in my reply ;) and sharing. In short, I think it’s a false dichotomy. So here’s my analysis of andragogy vs pedagogy.

Looking at Knowles’ andragogy, I think it’s misconstrued. What he talks about for adults is really true for all learners, taking into account their relative cognitive capability and amount of experience. So I fear that using andragogy will perpetuate the myth that pedagogy is a different learning approach (and keep kids in classrooms listening to lectures and answering rote questions). Empirically, direct instruction works (though its interpretation is different than the name might imply; I once pointed out how it and constructivism, properly construed, both really say the same thing ;).

There was an article that posited five differences, and I see a major confound; the article’s talking about andragogy as self-directed learning, and pedagogy as formal instruction. That’s apples and oranges. It really is more about whether you’re at a novice or practitioner level, and the role of instruction. Age is an arbitrary element here, not a defining factor. Addressing each point:

1. Adults are self-directing learners. No: in things they know they need, they can be, but they may also have their bosses or coaches pointing them to courses. Plus, for areas where adults are novices, they still need guided instruction. Also, owing to our bad K12 and higher ed, we’re not really enabling learners to be effective and efficient self-directed learners. Further, kids are self-directed about things they’re interested in. But we make little effort to ground what we do (particularly K6) in any reason why this is on the syllabus.

2. The role of learner experience. Yes, this matters, but it’s a continuum. Also, you always want to base instruction on learner experience, because elaboration requires connecting to and building on existing knowledge. Yes, we do tend to give kids abstract problems (particularly in math), which is contrary to good learning science. “Only two things wrong in education these days, the curriculum and the pedagogy; other than that we’re fine.” Ahem. We teach the wrong things, badly.

3. Adults generate interest in useful information. So does everyone, but that’s not a matter of developmental level. Kids also prefer stuff that’s relevant. We’ve developed a curriculum for kids that is out of date, and we don’t motivate it. Everyone has a curriculum, and there are degrees of self-direction, but it’s not a binary division.

4. Adult readiness to learn is triggered by relevance (yeah, kind of redundant). Kids also learn better when there’s a reason. Hence problem-based, service-based, and other such philosophies of learning. Even direct instruction posits meaningful problems. Again, the article’s comparing an ideal human learning model to a broken school model.

5. What motivates learners are real-life outcomes. Really, we’ve covered this: everyone learns better when there’s motivation. Children learn for grades because no one’s made it meaningful for them to care! Kids will pursue their learning when it makes sense to them. John Taylor Gatto made the case that kids could learn the entire K6 curriculum in 100 hours if they cared! Kids do learn outside of what’s forced on them by schooling, be it Pokemon, polka, or porcupines.

Thus, in the comparison between andragogy vs pedagogy, I come down on the side of pedagogy. It’s the earlier term, and while ‘ped’ does mean ‘kid’, I still think it’s really about learning design. Learning design should be aligned with our brains, not differentiated between child and adult. Yes, there are developmental differences, but they’re a continuum; it’s more a matter of capacity than a binary distinction. That’s my take; what’s yours?

  • February 2018
  • January 2018
  • December 2017
  • November 2017
  • October 2017
  • September 2017
  • August 2017
  • July 2017
  • June 2017
  • May 2017
  • April 2017
  • March 2017
  • February 2017
  • January 2017
  • December 2016
  • November 2016
  • October 2016
  • September 2016
  • August 2016
  • July 2016
  • June 2016
  • May 2016
  • April 2016
  • March 2016
  • February 2016
  • January 2016
  • December 2015
  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015
  • January 2015
  • December 2014
  • November 2014
  • October 2014
  • September 2014
  • August 2014
  • July 2014
  • June 2014
  • May 2014
  • April 2014
  • March 2014
  • February 2014
  • January 2014
  • December 2013
  • November 2013
  • October 2013
  • September 2013
  • August 2013
  • July 2013
  • June 2013
  • May 2013
  • April 2013
  • March 2013
  • February 2013
  • January 2013
  • December 2012
  • November 2012
  • October 2012
  • September 2012
  • August 2012
  • July 2012
  • June 2012
  • May 2012
  • April 2012
  • March 2012
  • February 2012
  • January 2012
  • December 2011
  • November 2011
  • October 2011
  • September 2011
  • August 2011
  • July 2011
  • June 2011
  • May 2011
  • April 2011
  • March 2011
  • February 2011
  • January 2011
  • December 2010
  • November 2010
  • October 2010
  • September 2010
  • August 2010
  • July 2010
  • June 2010
  • May 2010
  • April 2010
  • March 2010
  • February 2010
  • January 2010
  • December 2009
  • November 2009
  • October 2009
  • September 2009
  • August 2009
  • July 2009
  • June 2009
  • May 2009
  • April 2009
  • March 2009
  • February 2009
  • January 2009
  • December 2008
  • November 2008
  • October 2008
  • September 2008
  • August 2008
  • July 2008
  • June 2008
  • May 2008
  • April 2008
  • March 2008
  • February 2008
  • January 2008
  • December 2007
  • November 2007
  • October 2007
  • September 2007
  • August 2007
  • July 2007
  • June 2007
  • May 2007
  • April 2007
  • March 2007
  • February 2007
  • January 2007
  • December 2006
  • November 2006
  • October 2006
  • September 2006
  • August 2006
  • July 2006
  • June 2006
  • May 2006
  • April 2006
  • March 2006
  • February 2006
  • January 2006

Amazon Affiliate

Required to announce that, as an Amazon Associate, I earn from qualifying purchases. Mostly book links. Full disclosure.

We use cookies to ensure that we give you the best experience on our website. If you continue to use this site we will assume that you are happy with it.