Learnlets

Clark Quinn’s Learnings about Learning

Sleep & Walking

6 August 2024 by Clark 2 Comments

We interrupt our regularly scheduled blog for this public service announcement. We will resume normal broadcasting after this brief message.

My late friend, Jay Cross, once wrote a post that said something to the effect of: “if you want to have better health, lose weight…<and a litany of other health benefits>…start walking.”  My reasons are in addition to that, actually. I also believe strongly in sleep. (Let me be clear, not sleep walking, of which I have no knowledge.) So here’re some thoughts on sleep & walking.

First, let’s talk sleep. I don’t know why (self-justification?), but I’ve regularly tracked the research on sleep. And, I find some robust results:

  • Most of us really are best off with 8 hours of sleep
  • Reading in the same place you sleep means you neither read nor sleep as well
  • Keeping a regular sleep schedule helps
  • Naps are good

Also, of course, most people don’t do this. Personally, I try. It used to be about optimizing performance, but these days it’s more about maintaining performance! I can nap, though I usually don’t need to because of the first three. Also, I do try to get my eight hours (and am generally successful). I definitely don’t read in bed (tho’ occasionally I’ll get up to write something down so it’s off my brain and I can go back to sleep). And I try to be pretty regular in my sleep. I’m just following what’s recommended, and it seems to work. There’s more I’m not necessarily so good at, of course.

When it comes to walking, I don’t get it every day. That’s ok, because I try to exercise 5 days a week, and on 3 of those I use my torture device, er, exercise machine. I now do 30 minutes per session, per the doc, who asked for that much time at >100 beats per minute. I also do two strength sessions and some physio exercises to counteract my sedentary work life. I was doing 20+ minutes with High Intensity Interval Training (10 of those minutes alternate 30 seconds intense, 30 seconds not), and that’s still the case; I just extended the cool down.

The other two days a week I walk (sometimes more, if we do it on our weekend). I have a set route, so my mind can be free. Annie Murphy Paul, whose book The Extended Mind I cited in my recent ‘post cognitive’ presentation (requires free membership) for the LDA, talks about the benefits of being out in nature. Of course, my walk is through my neighborhood, but it’s a bit wild (no sidewalks; wild animals such as turkeys, hawks, quail, and the occasional coyote can be spotted).

My rationale for walking, however, in addition to health, is time to think! I come up with blog post topics, resolve questions, and more. Further, I don’t have headphones on, deliberately, so I’m aware but also allow what comes to mind. I also walk on the left side of the road, to face oncoming traffic, both a good idea and the law. (Too often I see folks walking with earphones, on the wrong side of the road, sometimes even with animals on a leash or a kid in a stroller! Yikes!)

We know that having time to reflect works. Being outside is also a boon. Together, it’s valuable time to think, as well as a healthy activity. I encourage you to follow good sleep practices and get in some walking (or equivalent, if there’re reasons that’s not possible). I’ve heard that walking conversations are also productive, but I work from home, so…

We now return you to your regularly scheduled blog, already in progress.

Emotions

30 July 2024 by Clark 2 Comments

Emotions matter. Yes, largely they’re cultural constructs, as Lisa Feldman Barrett tells us. Still, they can help or hinder learning. When designing games or creating meaningful learning, they matter. But they also affect us in our daily activities.

So, my previous post, on misinformation, is personal. I’m frustrated that family members are buying into some of it. I try to maintain a calm demeanor, but it’s challenging. Still, it’s a battle I haven’t yet given up on. Yet I’m also not immune to the larger effects of emotion.

[Image: a curve showing low performance at low and high arousal, with a peak of performance in between.]

What we know, from the Yerkes-Dodson curve, is that a little bit of arousal (read: emotion) can help, but too much can hurt. What isn’t clear from my conceptual rendering is what amount of arousal is ‘right’ for optimal performance. I’ll suggest that for learning it’s pretty low, as learning is itself stressful (stress being a form of arousal). And I do suggest we manipulate emotions (which I admit is shorthand for motivation, anxiety, and confidence, which aren’t the standard definition) to successfully achieve learning outcomes.
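The inverted-U shape can be sketched as a toy function. This is purely illustrative: the Gaussian form and the `optimum` and `width` parameters are invented for the sketch, not fitted to any data.

```python
import math

# Toy inverted-U: performance peaks at a moderate arousal level and falls
# off toward either extreme. 'optimum' and 'width' are invented parameters.
def performance(arousal: float, optimum: float = 0.5, width: float = 0.25) -> float:
    """Illustrative performance score (0..1) for arousal in [0, 1]."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# Moderate arousal beats both boredom and panic:
assert performance(0.5) > performance(0.1)
assert performance(0.5) > performance(0.9)
```

The post’s suggestion for learning amounts to shifting `optimum` left: because the learning task itself adds stress, less additional arousal is needed to hit the peak.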

However, even general functioning gets difficult when things are stressful. When I look at the design of casinos, for instance (a way to cope with the too many times I have to go to Vegas for conferences), I note that they’re deliberately information-sparse: low lighting, no clocks, few cues. It’s deliberate, so that you’re more focused on the enticements. They want you confused, because you’re then more vulnerable to predations.

I fear that there’s a bit of this in our culture. For instance, fear sells: more alarmist headlines lead to more engagement. That’s good for the news business, but perhaps bad for us in several ways. For one, there’s a vested interest in focusing on the alarming, not the bigger picture. Similarly, twisting stories to get emotional engagement isn’t unknown. That can be entertaining, but when the information we depend on is manipulated, it’s problematic. Reducing support for education similarly reduces the intelligence people can apply to analysis.

I struggled to find a topic this week, and I realize it’s because of the informational turmoil that’s currently in play. So, I thought I’d write about it (for better or worse ;). Exaggerating issues for the sake of clicks and sales, I’ll suggest, isn’t a good thing. I’m willing to be wrong, but I worry that we’re over-excited. Our emotions are being played on, for purposes that are not completely benign. That’s what’s worrying me; what about you?

Misinformation (and the fighting thereof)

23 July 2024 by Clark Leave a Comment

One of the banes of our corporate existence is the persistence of myths. (We seem to be immune to conspiracy theories, at least.) I’ve been fighting them in myriad ways over the years. Approaches include a book, talks, and more. We also need ways to vet new information for veracity. Here are a few steps taken recently for misinformation and the fighting thereof.

First, at the Learning Development Accelerator (LDA), we created a research checklist (warning: members only, but at the free level). This is intended as a way to vet claims, starting with the practical, but eventually getting into actually evaluating the research. We don’t necessarily recommend that last step, by the way. It’s probably better to trust research translators unless you’re really willing to dive into the details. (Translators: folks who’ve demonstrated a reliable ability to take research, extract the meaningful principles, and cut through hype.)

Then, Matt Richter, my colleague at the LDA, recommended Alex Edmans’ book May Contain Lies. I’ve read it, and found it an accessible and thoughtful treatment of analyzing claims and data (recommended). Matt even prompted the LDA to host a ‘meet the author’ session with Alex. That’s available to view (may also require free membership).

In it, he reiterated something in the book that I found valuable. He talked about a ‘ladder’ of investigation. Telegraphically, it’s this:

  1. Statement is not fact (the statement must be accurate)
  2. Fact is not data (the fact must be representative)
  3. Data is not evidence (the data must be conclusive)
  4. Evidence is not proof (the evidence must be universal)

What is being said here is that there are several steps in evaluating what folks want to tell (sell) you. If someone just quotes a statement, it’s not necessarily valid unless it’s accurate. Someone could make a claim that’s not actually true (as happens). Then, that statement alone isn’t data unless it’s representative of the general tenor of thought. For instance, a few positive anecdotes aren’t necessarily indicative of everyone’s experience. Then, representative data actually has to be sufficient to rule out any other explanations for the same outcome. For instance, finding out that people like something may not be indicative of its actual efficacy. Finally, the evidence has to apply in your situation, not just theirs.

He used some examples, for instance books that draw inferences from a few successful companies without determining that other companies with the inferred characteristics also succeed. What’s nice is that he has boiled down what can be an overwhelming set of rules into a simple framework. Misinformation isn’t diminishing; it even seems to be increasing. There’s an increasing need to separate bogus claims from legitimate ones. We need to be rallying around misinformation and the fighting thereof. Here’re some tools. Good luck!

The easy answer

16 July 2024 by Clark Leave a Comment

In working on something, I’m looking at the likely steps people take. Of course, I’m listing them from easiest to most useful (with the hope that folks understand they should take the latter). However, it’s making me think that, too often, people are looking at the easy answer, not the most accurate one. Because they really don’t know the problem. When does the easy answer make sense? Are we letting ourselves off the hook too much?

So, for instance, in learning we really should do analysis when someone asks for something. “We need a course on X.” “Ok, what tells you that you need this, and how will we know when it’s worked?” In a quick family convo, we established that this sort of un-analytical request is made all the time:

  • “Why isn’t my plant blooming?” (It’s not the season.)
  • “Fix this code.” (The input’s broken, not the code.)
  • …

Yet, people actually don’t do this up-front analysis. Why? It’s harder, it takes more time, it slows things down, it costs more. Besides, we know what the problem is.

[Image: double diamond diagram: diverge then converge, first on the problem, then on the solution.]

Except, we don’t know what the problem is. Too often, the question or request makes assumptions about the state of the world that may not be true. It may be the right answer, but it may not. Ensuring that you’ve identified the problem correctly is the first part of the design process, and you should diverge in exploration before you converge on a solution. That’s the double diamond, where you first explore the problem before you explore a solution.

Perhaps counter-intuitively, this is more efficient. Why? Because you’re not expending resources solving the wrong problem. Are you sure you’ve gotten it right? How do you know when to take the easier path? If you know the answer you need, you’re better equipped to choose the level of solution you need. If you don’t know the question, however, and make assumptions about the root cause, you can go off the rails. And, end up spending effort you didn’t need to.

Look, I live in the real world. I have to take shortcuts (heck, I’m lazy ;). And I do. However, I like to do that when I know the answer, and know that the outcome is good enough to meet the need. I’ll go for the easy answer, if I know it’ll solve the problem well enough. But I can’t if I don’t know the question or problem, and just assume. And we know what happens when we ass-u-me.

A Learning Science Conference?

9 July 2024 by Clark Leave a Comment

[Image: Learning Science Conference 2024 banner: “Online. Asynchronous & Live Sessions”.]

In our field of learning design (aka instructional design), it’s too frequently the case that folks don’t actually know the underlying learning science that guides processes, policies, and practices. Is this a problem? If it is, what is the remedy?

Consider that you wouldn’t want an electrician who didn’t understand the principles of electricity. Such a person might not understand, for instance, the importance of grounding, leaving open the possibility of burning down the house.

So, too, with learning. If you don’t understand learning science, you might not understand why learning styles are a waste of money, the lack of value of information alone, or why you should make the alternatives to the right answer reflect typical misconceptions. There’s lots more: models, context, and feedback are also among the topics whose nuances most folks don’t understand.

If you don’t understand learning science, you waste money. You are likely to design ineffective learning, wasting time and effort. Or you might expend unnecessary effort on things that don’t have an impact. Overall, it’s a path to the poorhouse.

Of course, there are other reasons why we don’t have the impact we should: mismatched expectations on costs and time, SME recalcitrance and hubris, and more. Still, you’re better equipped to counter these problems if you can justify your stance from sound research.

The way to address this, of course, also isn’t necessarily easy. You might read a book, though some can mislead you. And, you still don’t get answers if you have questions. Or, you could pay for a degree, but those can be quite expensive and ineffective. Too frequently they spend time on process and not enough on principles.

There’s another option, one we’re providing. What if you could get the core essentials, curated for their relevance? What if that content were provided asynchronously, buttressed by the opportunity for meaningful interaction, in a tight time frame (at different times depending on your location)? What if the presentations were by some of the most important names in the field, individuals who’ve reliably demonstrated an ability to translate academic research into comprehensible principles? And, finally, what if it were delivered at an appropriate cost? Does that sound like a valuable proposition?

I’d like to invite you to the Learning Science Conference, put on by the Learning Development Accelerator. Faculty who have already agreed include Ruth Clark (co-author of eLearning & The Science of Instruction), myself (author of Learning Science for Instructional Designers), Matt Richter (co-director of the Thiagi group), and Nidhi Sachdeva (faculty at University of Toronto). The curriculum covers 9 of the most important elements of learning science, including learning, myths and barriers, motivation, informal and social learning, media, and evaluation.

This event is designed to leave you with the foundations necessary to be able to design learning experiences that are both engaging and effective, as well as dealing with the expected roadblocks to success. Frankly, we see little else that’s as comprehensive and practical. We hope to see you there!

2024 ITA Jay Cross Memorial Award: Ryan Tracey

5 July 2024 by Clark Leave a Comment

The Internet Time Alliance Memorial Award in memory of Jay Cross is presented to a workplace learning professional who has contributed in positive ways to the field of Informal Learning and is reflective of Jay’s lifetime of work.

Recipients champion workplace and social learning practices inside their organization and/or on the wider stage. They share their work in public and often challenge conventional wisdom. The Award is given to professionals who continuously welcome challenges at the cutting edge of their expertise and are convincing and effective advocates of a humanistic approach to workplace learning and performance.

We announce the award on 5 July, Jay’s birthday.

Following his death in November 2015, the partners of the Internet Time Alliance – Jane Hart, Charles Jennings, Clark Quinn, and Harold Jarche – resolved to continue Jay’s work. Jay Cross was a deep thinker and a man of many talents, never resting on his past accomplishments, and this award is one way to keep pushing our professional fields and industries to find new and better ways to learn and work.

The 9th Annual Internet Time Alliance Jay Cross Memorial Award for 2024 is presented to Ryan Tracey.

Over the past 25 years, Ryan has consistently demonstrated a resilient approach to, and advocacy for, workplace learning. He speaks pragmatically about supporting learning, with an irreverent yet supportive style. His blog, e-Learning Provocateur, is a source of insight. He’s recognized for looking beyond formal learning to social and informal learning, recognizing that learning happens regardless, and that the job of L&D is to support and facilitate it, not to be completely responsible for it.

Ryan has served in multiple roles for organizations across industries and government, moving from academic products through organizational learning & development and innovation roles to his current position as capability manager at Macquarie Group. As a learning professional, Ryan has also demonstrated support for colleagues. He has pointed to opportunities, given advice, and served generously.

Ryan has also contributed widely to the global profession through membership of the editorial board for the Association for Computing Machinery’s eLearn Magazine and other committees.

His children’s book, Ryan the Lion, which explores themes of social tolerance, self-esteem, and personal identity, reflects Ryan’s own beliefs in human-centred learning and development.

Personally, he has been a gracious host to several of us during visits to Sydney.

For his contributions and continual advocacy for going beyond instruction, Ryan is the 2024 recipient of the award.

Break it down!

2 July 2024 by Clark 2 Comments

[Image: jigsaw puzzle pieces.]

In our LDA Forum, someone posted a question asking about applying Cathy Moore’s Action Mapping to soft skills, like improving team dynamics. Now, they’re specifically asking about a) people with experience, and b) the context of not-for-profits, so…I’m not a good candidate to respond. However, it does raise a more common problem: how do you train things that are more ephemeral, like, for instance, leadership or communication? My short answer is “break it down”. What do I mean? Here’re some thoughts, and I welcome feedback!

Many moons ago, I co-wrote a paper on evaluating social media impacts. There are the usual metrics, like ‘engagement’. That is, are people using the system? Of course, for companies charging for their platform, this could be as infrequent as a person accessing it once a month. More practically, however, it should be a person hitting it at least several times a week, or even several times a day! If you’re communicating, cooperating, and collaborating, you really should be interacting at a fair frequency.

I, on the other hand, argued for more detailed implications. If you’re putting it into a sales team, you should expect not only messages, but more success on sales, shorter sales cycles, etc. So you can get more detailed. These days, you can do even more, and have the system actually tag what the messages are about and count them. You can go deeper.

Which is what I think is the answer here. What skills do you want? For an innovation demo with Upside Learning, I argued we should break it down: how to work out loud, how to provide feedback, how to run group meetings. (I’m just reading Alex Edmans’ May Contain Lies, and it contains a lot of detail about how to consider data and evidence.) We can look for more granular evidence. Even for skills like team dynamics, you should be looking at what makes good dynamics: things like making it safe yet accountable, providing feedback on the behavior not the person, valuing diversity, etc. There should be specific skills you want to develop, and assess. These, then, become the skills you design your learning to accomplish. You are, basically, creating a curriculum of the various skills that comprise the aggregated topic.

It may be that you assess a priori, and discover that only some are missing in your teams. That upfront analysis should happen regardless, but is too infrequent. The interlocutor here also mentioned the audience complaining about the time for analysis. Yep, that’s a problem. Reckon you have to sell the whole package: analyzing, designing, and evaluating for impact on performance, not just some improvement. Yet, compared to throwing money away? Seems like targeting intervention efforts should be a logical sell. If only we lived in a rational world, eh?

Still, overall, I think that these broad programs break down into specific skills that can be targeted and developed. And we should do so. Let’s not settle for vague intentions and explanations, and consequently no outcomes. Let’s do the work, break it down, and develop actual skills. That, at least, is my take; I welcome hearing yours!

Diving or surfacing?

25 June 2024 by Clark Leave a Comment

[Image: bubbles in water with light behind.]

In my regular questing, one of the phenomena I continue to explore is design. Investigating reveals, for instance, that, contrary to recommendations, designers approach practice more pragmatically. That’s something I’ve been experiencing both in my work with clients and in recent endeavors. So, reflecting: are folks diving or surfacing, and which should they be?

The original issue is how designers design. If you look at recommendations, they typically suggest starting at the top-level conceptualization and working down, such as Jesse James Garrett’s Information Architecture approach (PDF of the Elements of User Experience; note that he puts the highest level of conceptualization at the bottom and argues to work up). Empirically, however, designers switch between top-down and bottom-up. What do I do?

Well, it of course depends on the project. Many times (and, ideally), I’m brought in early, to help conceptualize the strategy, leveraging learning science, design, organizational context, and more. I tend to lead the project’s top-level description, creating a ‘blueprint’ of where to go. From there, more pragmatic approaches make sense (e.g. bringing in developers). Then, I’m checking on progress, not doing the implementation. I suppose that’s like an architect. That is, my role is to stay at the top-level.

In other instances, I’m doing more. I frequently collaborate with the team to develop a solution. Or, sometimes, I get concrete to help communicate the vision that the blueprint documents. Which, in working with an unfamiliar team, isn’t unusual. That ‘telepathy’ comes with getting to know folks ;).

In those other instances, I’ll also find that pragmatic constraints influence the overarching conceptualization, and work back up to see how the guidelines need to be adapted to the particular instance. Or we need to disconnect from the details to remember what our original objective is. This isn’t a problem! In general, we should expect that ongoing development unearths realities that weren’t visible from above, and vice versa. We may have good general principles (e.g. from learning science), but then we need to adapt them to our circumstances, which are unlikely to be an exact match. In general, we need to abstract the best principles, and then de- and re-contextualize.

I find that while it’s harder work to wrestle with the details (more pay for IDs! ;), it’s very worthwhile. What’s developed is better as a result of testing and refining. In fact, this is a good argument about why we should iterate (and build it into our timelines and budgets). It’s hubris to assume that ‘if we build it, it is good’. So, let’s not assume we can either be diving or surfacing, but instead recognize we should cycle between them. Start at the top and work down, but then regularly check back up too!

Learning Debt?

18 June 2024 by Clark Leave a Comment

In our LDA conversation with David Irons, User Experience (UX) Strategist, for our Think Like A… series, he mentioned a concept I hadn’t really considered: ‘design debt’, an extension of the idea of ‘tech debt’. I was familiar with the latter, but hadn’t thought of it from the UX side. Nor the LXD side! Could we have a learning debt?

So, tech debt is the delta between what good technology design would suggest and what we do to get products out the door. For instance, using a sorting algorithm that’s quick to implement and fine for small numbers of entries, but that doesn’t handle volume. The accrued debt only gets paid back once you go back and redesign. Which, too often, doesn’t happen, and the debt accumulates. The problems can make it difficult to expand capabilities or to scale performance. I think of how Apple OS updates occasionally don’t really add new features but instead fix the internals. (Hasn’t seemed to happen as much lately?)
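To make the sorting example concrete, here’s a hypothetical sketch: a hand-rolled insertion sort that was quick to ship and works fine for small inputs, but does O(n²) comparisons, so it collapses at scale. The gap between it and the redesigned version is the accrued debt.

```python
# Quick-to-ship sort: correct, and fast enough for a few hundred entries,
# but O(n^2), so it bogs down badly at scale. That gap between 'works
# today' and 'good design' is the tech debt.
def insertion_sort(items):
    out = list(items)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]  # shift larger elements right
            j -= 1
        out[j + 1] = key
    return out

# Paying the debt back: the redesign swaps in an O(n log n) sort
# (here, Python's built-in, which uses Timsort).
def paid_back_sort(items):
    return sorted(items)
```

Both functions give identical results; only the redesigned one keeps giving them in reasonable time as the input grows, which is why the debt matters even though nothing is visibly ‘broken’.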

Design debt is the UX equivalent. We see expedient shortcuts or gaps in the UX design, for instance. As Ward Cunningham, an agile proponent, says:

Design debt is all the good design concepts of solutions that you skipped in order to reach short-term goals. It’s all the corners you cut during or after the design stage, the moments when somebody said: “Forget it, let’s do it the simpler way, the users will make do.”

It’s a real thing. You may experience it when entering a phone number into a field and then being told it’s not in the proper format (though there was no prior indication of what the required format is). That’s bad design, and could (and should) be fixed.
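One common way to pay back that particular debt is to accept any familiar format and normalize it, rather than rejecting the user’s input after the fact. A minimal sketch, assuming US-style 10-digit numbers (the function name and rules are illustrative, not any standard API):

```python
import re

def normalize_us_phone(raw: str):
    """Strip punctuation and an optional leading country code '1';
    return the canonical 10 digits, or None if it can't be a US number."""
    digits = re.sub(r"\D", "", raw)  # drop everything that isn't a digit
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits if len(digits) == 10 else None

# All of these are the 'same' number to the user, so accept them all:
for raw in ("(415) 555-0123", "415.555.0123", "+1 415-555-0123"):
    assert normalize_us_phone(raw) == "4155550123"
```

The design point: the system absorbs the formatting burden instead of pushing it onto the user, which is exactly the corner that gets cut when someone says “the users will make do.”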

This could be true in learning, too. Could we have ‘learning debt’? When we make practice (and I should note, for previous and future posts, that includes any assessment where learners apply the knowledge we’ve provided) about knowledge instead of application of knowledge, for instance, we’re creating a gap between what they’ve learned to do and what they need to do. That’s a problem. Or when we put in content because someone insists it has to be there, rather than because a designer decided it’s necessary for the learning. Which adds to cognitive load and undermines learning!

How often do we go back and improve our courses? If we’re offering workshops or some other live instruction, we can adapt. When we create elearning, however, we tend to release it and forget it. When I ask audiences if they have any legacy courses that are out of date and unused but still hanging around their LMS, everyone raises their hands. We may update courses whose info has changed, but how many times do we go back and redo asynchronous courses because we’ve tracked evidence that they’re not working sufficiently? Yes, I acknowledge it happens, but not often enough. (*cough* We don’t evaluate our courses sufficiently nor appropriately. *cough*)

Ok, so everyone makes tradeoffs. However, which ones should we make? The evidence suggests erring on the side of better practice and less content. Prototyping and testing are other steps we can take to remove debt up front. As with UX, gaps in design early on cost more to fix later. We don’t typically go back and fix, but we can and should. Better yet, test and fix before it goes live. Another way to think about it: learning debt is money wasted. Build, run, and not learn? Or build, test, and refine until learning happens?

There are debts we can sustain, and ones we can’t. And shouldn’t. When our learning doesn’t even happen, that’s not sustainable. Our Minimum Viable Product has to be at least viable. Too often, it’s not. Let’s ensure that viable means it achieves an outcome, eh? It might not be the optimal improvement, or take as little time as possible, but at least it’s achieving an outcome. That’s better than releasing a useless product (despite no one knowing), even if we get paid (internally or externally). What am I missing?


Reflecting on adaptive learning technology

11 June 2024 by Clark 1 Comment

My last real job before becoming independent (long story ;) was leading a team developing an adaptive learning platform. The underlying proposition was the basis for a topic I identified as one of my themes. Thinking about it in the current context I realize that there’re some new twists. So here I’m reflecting on adaptive learning technology.

So, my premise for the past couple of decades is to decouple what learners see from how it’s delivered. That is, have discrete learning ‘objects’, and then pull them together to create the experience. I’ve argued elsewhere that the right granularity was by learning role: concepts are separate from examples, from practice, etc. (I had team members participating in the standards process.) The adaptive platform was going to use these learning objects to customize the sequence for different learners. This was both within a particular learning objective, and across a map of the entire task hierarchy.

The way the platform was going to operate was typical in intelligent tutoring systems, with a twist. We had a model of the learner, and a model of the pedagogy, but not an explicit model of expertise. Instead, the expertise was intrinsic to the task hierarchy. This was easier to develop, though unlikely to be as effective. Still, it was scalable, and using good learning science behind the programming, it should do a good job.

Moreover, we were going to then have machine learning, over time, improve the model. With enough people using the system, we would be able to collect data to refine the parameters of the teaching model. We could possibly be collecting valuable learning science evidence as well.

One of the barriers was developing content to our specific model. Yet I believed then, and still now, that if you developed it to a standard, it should be interoperable. (We’re glossing over lots of other inside arguments, such as whether smart object or smart system, how to add parameters, etc.) That was decades ago, and our approach was blindsided by politics and greed (long sordid story best regaled privately over libations). While subsequent systems have used a similar approach (*cough* Knewton *cough*), there’s not an open market, nor does SCORM or xAPI specifically provide the necessary standard.

Artificial intelligence (AI) has changed over time. While evolutionary, it appears revolutionary in what we’ve seen recently. Is there anything there for our purposes? I want to suggest no. Tom Reamy, author of Deep Text, argues that hybrids of symbolic and sub-symbolic AI (generative AI is an instance of the latter) have potential, and that’s what we were doing. Systems trained on the internet or other corpuses of images and/or text aren’t going to provide the necessary guidance. If you had a sufficient quantity of data about learning experiences with the characteristics of your own system, you could do it, but if it exists it’s proprietary.

For adaptive learning about tasks (not knowledge; a performance focus means we’re talking about ‘do’, not know), you need to focus on tasks. That isn’t something AI really understands, as it doesn’t really have a way to comprehend context. You can tell it, but it also doesn’t necessarily know learning science either (ChatGPT can still promote learning styles!). And, I don’t think we have enough training data to train a machine learning system to do a good job of adapting learning. I suppose you could use learning science to generate a training set, but why? Why not just embed it in rules, and have the rules work to generate recommendations (part of our algorithm was a way to handle this)? And, as said, once you start running you will eventually have enough data to start tuning the rules.
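A minimal sketch of what “embed it in rules” might look like: the pedagogy lives in explicit, inspectable rules that choose the next learning object by role (per the granularity argument above), and the thresholds are exactly the parameters you’d later tune from accumulating data. All names and numbers here are invented for illustration, not the actual platform’s algorithm.

```python
# Rule-based next-step recommendation over role-separated learning objects
# (concept, example, practice). The 0.8 mastery criterion is an invented,
# tunable parameter, not a learning-science constant.
def next_object(learner: dict, mastery: float = 0.8) -> str:
    if not learner.get("seen_concept"):
        return "concept"        # introduce the idea first
    if not learner.get("seen_example"):
        return "example"        # then show a worked example
    if learner.get("practice_score", 0.0) < mastery:
        return "practice"       # drill until the criterion is met
    return "next_objective"     # advance in the task hierarchy

print(next_object({"seen_concept": True, "seen_example": True,
                   "practice_score": 0.5}))  # prints "practice"
```

Because the rules are explicit, tuning from usage data means adjusting `mastery` or reordering rules, not retraining an opaque model; that’s the tradeoff being argued for here.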

Look, I can see using generative AI to provide text, or images, but not sequencing, at least not without a rich model. Can AI generate adaptive plans? I’m skeptical. It can do it for knowledge, for sure, generating a semantic tree. However, I don’t yet see how it can decide what application of that knowledge means, systematically. Happy to be wrong, but until I’m presented with a mechanism, I’m sticking to explicit learning rules. So, where am I wrong?
