Learnlets

Clark Quinn’s Learnings about Learning

Citations

28 November 2018 by Clark 2 Comments

Following on my thoughts on writing yesterday, this was a topic that didn't fit (the post got too long ;). So here we go… Colleagues have written that citations are important. If you're making a claim, you should be able to back it up. On the other hand, if you're citing what you think is 'received wisdom', do you need to bother? Pondering…

Now, citations can interfere with the flow, I believe. If not the reading, they can interfere with the flow of my writing! (And, I’ve been accused of ‘name dropping‘, where instead I believe it’s important to both acknowledge prior work and show that you know what’s been done.) Still, it’s important to know what to cite, and when.

I admit that I don't always cite the claims I make, because I take them as givens. I may say something like "we know" or otherwise presume that what I'm saying is an accepted premise. One problem, of course, is that I don't know what others know (and don't). And, of course, this isn't a formal article; it's my blog ;). Still, when I'm talking about something new to me (like thoughts from books), I will cite the source.

Articles are different. When I write those, I try to provide sources. In both cases I generally don't go to the extent of journal article links, because I don't expect that folks have easy access to them, and so I prefer to cite more commonly available resources, like books that have 'digested' the research.

And when I write ‘take down’ articles, I don’t cite the offender. It’s to make the point, not shame anyone. If you’re really curious, I’m sure you can track it down.

And, realize that I don't have easy access to journals either. Not being affiliated with an institution, I don't have access to the original articles behind a paywall. I tend to depend on books and articles that summarize the research. Still, I have over a decade of grounding in the original materials and am able to make inferences. And of course occasionally I'll be wrong. Sometimes, I'll even admit it ;).

The real issue is when you need to provide a citation. And I reckon it's when you're stating something that folks might disagree with. And I can't always anticipate that. So I'll try to consistently point to the basis for any claims I think might be arguable, or state that it's my (NSH :) opinion. And you can always ask! Fair enough?

Realities: Why AR over VR

29 August 2018 by Clark 3 Comments

In the past, I've alluded to why I like Augmented Reality (AR) over Virtual Reality (VR). And in a conversation this past week, I talked about realities a bit more, and I thought I'd share. Don't get me wrong, I like VR a lot, but I think AR has the bigger potential impact. You may or may not agree, but here's my thinking.

In VR, you create a completely artificial context (maybe mimicking a real one), and you can explore or act on these worlds. The immersiveness has demonstrably improved outcomes over a non-immersive experience. Put to use for learning, where the affordances are leveraged appropriately, VR can support deep practice. That is, you can maximize transfer to the real world, particularly where 3D is natural. For situations where the costs of failure are high (e.g. lives), this is the best practice before mentored live performance. And we can do it at scales that are hard to handle on flat screens: navigating molecules or microchips at one end, or large physical plants or astronomical scales at the other. And, of course, VR worlds can be completely fantastic, as well.

AR, on the other hand, layers additional information on top of our existing reality. Whether with special glasses, or just through our mobile devices, we can elaborate on top of our visual and auditory world. The context already exists, so it's a matter of extrapolating from it, rather than creating it whole. On the other hand, recognizing and aligning with the existing context is hard. Yet being able to make the invisible visible where you already are, and presumably are for a reason that makes it intrinsically motivating, strikes me as a big win.

First, I think the learning outcomes from VR are great, and I don't mean to diminish them. However, I wonder how general they are, versus being specific to inherently spatial, and potentially social, learning. Instead, I think there's a longer-term value proposition for AR. There's less physical overhead in having your world annotated versus having to enter another one. While I'm not sure which will end up having greater technical overhead, the ability to add information to a setting to make it a learning one strikes me as a more generalizable capability. And I could be wrong.

Another aspect is of interest to me, too. My colleague was talking about mixed reality, and I honestly wondered what that was. His definition sounded like alternate reality, as in alternate reality games. And that, to me, is also a potentially powerful learning opportunity. You can create a separate, fake-but-appearing-real set of experiences, bound by story and consequences of action, that can facilitate learning. We did it once with a sales training game that intruded into your world via email and voicemail. The situations and consequences intrude into your world and require decisions and actions; they don't have real consequences, but they do impact the outcomes. And these could be learning experiences too.

At core, to me, it’s about providing either deep practice or information at the ‘teachable moment’. Both are doable and valuable. Maybe it’s my own curiosity that wants to have information on tap, and that’s increasingly possible. Of course, I love a good experience, too. Maybe what’s really driving me is that if we facilitate meta-learning so people are good self-learners, having an annotated world will spark more ubiquitous learning. Regardless, both realities are good, and are either at the cusp or already doable.  So here’s to real learning!

Reading List?

31 July 2018 by Clark 1 Comment

I saw another query about 'reading list recommendations' (e.g. as an addition to Millennials, Goldfish, & Other Training Misconceptions ;), and I thought I'd weigh in, with a different spin. What determines which books you should read? Maybe your level of expertise? So here is a reading list depending on where you are as a designer.

Note, this is a relatively personal list, and not the mainstream ID texts. It's not Gagné, Brown & Green, Dick & Carey, or even Horton. These are books that either get you going without those, or supplement them once you are going. And they're ones I know; I can't read everything!

Beginning (e.g. the accidental instructional designer):

Cammy Bean’s The Accidental Instructional Designer.  Now, I think the fact that this book  needs to exist is kind of an indictment of our field. Do we have accidental surgeons?  Not to the extent we  prepare for them!  Still, it’s a reality, and Cammy’s done the field a real service in this supremely practical and  accessible book.

Michael Allen's Guide to eLearning. Michael's got the scientific credibility, the practical experience, a commitment to making things right, and a real knack for simplifying things. This book, with its SAM and CCAF frameworks, provides a very good go-to-whoa process for designing learning experiences that will work.

Practicing Designer:

Julie Dirksen’s  Design for How People Learn is a really accessible introduction to learning science, boiling it down into practical terms as a process for design.  With great illustrations, it’s an easy but important read.

Donald Norman’s  Design of Everyday Things  is pretty much key reading for  anyone who designs for people. (OK, so I’m biased, because he was my Ph.D. adviser,  but  I’m not the only one who says so.) Not specific to learning, but one of those rare books that is pretty much guaranteed to change the way you look at the world.

Going deeper:

Brown, Roediger, & McDaniel’s  Make it Stick. 10 points from learning science about what works.

Ericsson’s  Peak, a book about what makes real expertise, with a focus on the nuances of  deliberate practice.

Patti Shank’s new  Make it Learnable series gets into specifics on learning design. Comprehensive and yet accessible.

Ruth Clark's eLearning & The Science of Instruction (with Richard Mayer) and/or Efficiency in Learning (with Sweller & Nguyen).

Of course, there are separate topics:

Mobile: my own  Designing mLearning  and anything  by Chad Udell  (e.g.  Learning Everywhere)

Games:  my own  Engaging Learning  and anything game from Karl Kapp (e.g. the new book with Sharon Boller, Play to Learn).

Realities:  Koreen Pagano’s  Immersive Learning, and perhaps Kapp & O’Driscoll’s Learning in 3D as a foundation.

Performance Support: Allison Rossett’s Job Aids & Performance Support  and Gottfredson & Mosher’s  Innovative  Performance Support.  

Social: Conner & Bingham’s  The New  Social Learning and Jane Bozarth’s Social  Media for Trainers.

Going Broader:

Informal:  Jay Cross’  Informal Learning.  Talking about the rest of learning besides formal.

Performance Ecosystem: Marc Rosenberg’s  Beyond eLearning  (the start), and/or  my  Revolutionize Learning & Development.  About L&D strategy; going beyond just courses to meet the real needs of the org.

Going Really Deep (if you really want to geek out on learning and cognitive science):

Daniel Kahneman's Thinking, Fast and Slow, about how our brains don't work logically. Or the behavioral economics stuff.

Andy Clark’s  Being There about the newer views on cognition including situated cognition.

Of course, there’re lots more, depending on whether you’re interested in assessment, evaluation, content, or more. But this is my personal and idiosyncratic set of recommendations. There are other people I’d point you to, too, but this is the suite of books you can, and should, get your mitts on. Now, what’s on  your list?

Microlearning Malarkey

27 June 2018 by Clark 7 Comments

Someone pointed me to a microlearning post, wondering if I agreed with their somewhat skeptical take on the article. And I did agree with the skepticism.  Further, it referenced another site with worse implications. And I think it’s instructive to take these apart.  They are emblematic of the type of thing we see too often, and it’s worth digging in. We need to stop this sort of malarkey. (And I don’t mean microlearning as a whole, that’s another issue; it’s articles like this one that I’m complaining about.)

The article starts out defining microlearning as small bite-sized chunks. Specifically: “learning that has been designed from the bottom up to be consumed in shorter modules.” Well, yes, that’s one of the definitions.  To be clear, that’s the ‘spaced learning’ definition of microlearning. Why not just call it ‘spaced learning’?  

It goes on to say “each chunk lasts no more than five-then minutes.” (I think they mean 10). Why? Because attention. Um, er, no.  I like JD Dillon‘s explanation:  it needs to be as long as it needs to be, and no longer.

That attention explanation? It went right to the 'span of a goldfish'. Sorry, that's been debunked (for instance, here ;). That data wasn't from Microsoft; it came from a secondary service that got it from a study on web pages. The change could be due to faster pages, greater experience, or other explanations, but not a change in our attention (evolution doesn't happen that fast, and attention is too complex for such a simple assessment). In short, the original study has been misinterpreted. So, no, this isn't a good basis for anything having to do with learning. (And I challenge you to find a study determining the actual attention span of a goldfish.)

But wait, there’s more!  There’s an example using the ‘youtube’ explanation of microlearning. OK, but that’s the ‘performance support’ definition of microlearning, not the ‘spaced learning’ one. They’re two different things!  Again, we should be clear about which one we’re talking about, and then be clear about the constraints that make it valid. Here? Not happening.  

The article goes on to cite a bunch of facts from the Journal of Applied Psychology. That’s a legitimate source. But they’re not pulling all the stats from that, they’re citing a secondary site (see above) and it’s full of, er, malarkey.  Let’s see…

That secondary site is pulling together statistics in ways that are thoroughly dubious. It starts by citing the journal for one piece of data, which is a reasonable effect (a 17% improvement for chunking). But then it goes awry. For one, it claims playing to learner preferences is a good idea, but the evidence is that learners don't have good insight into their own learning. There's a claim of a 50% engagement improvement, but that's a misrepresentation of the data: 50% of people said they would like smaller courses. That doesn't mean you'll get a 50% improvement. They also make a different claim about appropriate length than the one above – 3-7 minutes – but that argument is unsound too. It sounds quantitative, but it's misleading. They throw in the millennial myth, too, just for good measure.

Back to the original article: it cites a figure not on the secondary site, but listed in the same bullet list: "One minute of video content was found to be equal to about 1.8 million written words". WHAT? That's just ridiculous. 1.8 MILLION?!? Found by whom? Of course, there's no reference. And the mistakes go on. The other two bullet points aren't from that secondary site either, and also don't have citations. The reference, however, could mislead you to believe that the rest of the statistics were also from the journal!

Overall, I'm grateful to the correspondent who pointed me to the article. It's hype like both of these that misleads our field, undermines our credibility, and wastes our resources. And it makes it hard for those trying to sell legitimate services within the boundaries of science. It's important to call this sort of manipulation out. Let's stop the malarkey, and get smart about what we're doing and why.

Nuances Matter

30 May 2018 by Clark 1 Comment

I’ve argued before that the differences between well-designed and well-produced learning, and just well-produced learning, are subtle. And, in general, nuances matter. So, in my recent book, the section on misconceptions spent a lot of time unpacking some terms. The goal there was ensuring that the nuances were understood. And a recent event triggered even more reflection on this.

Learnnovators, a company I’ve done a couple of things with (the Deeper eLearning series, and the Workplace of the Future project), interviewed me once quite a while ago. I was impressed then with the depth of their background research and thoughtful questions. And they recently asked to interview me on the book. Of course, I agreed. And again they impressed me with the depths of their questions, and I realized in this case there was something specific going on.

In their questions, they were unpacking what common concerns would be about some of the topics. The questions dug into ways in which people might think that the recommendations are contrary to personal experience, and more. They were very specifically looking for ways in which folks might think to reject the findings. And that's important. I believe I had addressed most of them in the book, but it was worth revisiting them.

And that’s the thing that I think is important about this for our practice. We can’t just do the surface treatment. If we just say: “ok we need some content, and then let’s write a knowledge test on it”, we’ve let down our stakeholders.  If we don’t know the cognitive properties of the media we use, don’t sweat the details about feedback on assessment, don’t align the practice to the needed performance, etc., we’re not doing our job!

And I don't mean you have to get a Ph.D. in learning science, but you really do need to know what you're doing. Or, at least, have good checklists and quick reference guides to ensure you're on track. Ideally, you review your processes and tools for alignment with what's known. And the tools themselves could have support. (OK, to a limit; I've seen this done to the extent of putting handcuffs on design.)

Nuances matter,  if you care about the outcomes (and if you don’t, why bother? ;).  I’ve been working on both a checklist and on very specific changes that apply to various places in design processes that represent the major ways folks go wrong. These problems are relatively small, and easy to fix, and are designed to yield big improvements. But unless you know what they are, you’re unlikely to have the impact you intend.

SMEs for Design

25 April 2018 by Clark Leave a Comment

In thinking through my design checklist, I was pondering how information comes from SMEs, and the role it plays in learning design. And it occurred to me visually, so of course I diagrammed it.

The problem with getting design guidance from SMEs is that they literally can’t tell us what they do!  The way our brains work, our expertise gets compiled away. While they  can tell us what they know (and they do!), it’s hard to get what really needs to be understood.  So we need a process.

(Diagram: mapping SME questions to ID elements)

My key is to focus on the decisions that learners will be able to make that they can't make now. I reckon what's going to help organizations is not what people know, but how they can apply it to problems to make better choices. And we need SMEs who can articulate that. Which isn't all SMEs!

That also means that we need models: information that helps guide learners' performance while they compile away their expertise. Conceptual models are the key here: causal relationships that can explain what did happen or predict what will happen, so we can choose the outcomes we want. And again, not all SMEs may be able to do this part.

There’s also other useful information SMEs can give us. For one, they can tell us where learners go wrong. Typically, those errors aren’t random, but come from bringing in the wrong model.  It would make sense if you’re not fully on top of the learning.  And, again we may need more than one SME, as sometimes the theoretical expert (the one who can give us models and/or decisions) isn’t as in tune with what happens in the field, and we may need the supervisor of those performers.

Then, of course, there are the war stories. We need examples of wins (and losses).  Ideally, compelling ones (or we may have to exaggerate). They should  be (or end up) in the form of stories, to facilitate processing (our brains are wired to parse stories).  Of course, after we’re done they should refer to the models, and show the underlying thinking, but that may be our role (and if that’s hard, maybe we either have the wrong story or the wrong model).

Finally, there’s one other way experts can assist us. They’ve found this topic interesting enough to spend the years necessary to  be the experts.  Find out why they find it so fascinating!  Then of course, bake that in.

And it makes sense to gather the information from experts in this order. However, for learning, this information plays roles in different places.  To flip it around, our:

  • introductions need to manifest that intrinsic interest (what will the learners be able to do that they care about?)
  • concepts need to present those models
  • examples need to capture those stories
  • practice needs to embed those decisions
  • practice needs to provide opportunities to exhibit those misconceptions before they matter
  • closings may also reference that intrinsic interest, closing out the emotional experience

That’s the way I look at it.  Does this make sense to you? What am I missing?


Plagiarism and ethics

17 April 2018 by Clark 1 Comment

I recently wrote on the ethics of L&D, and I note that I  didn’t  address one issue. Yet, it’s very clear that it’s still a problem. In short, I’m talking about plagiarism and attribution.  And it needs to be said.

In that article, I  did say:

That means we practice ethically, and responsibly. We want to collectively determine what that means, and act accordingly. And we must call out bad practices or beliefs.

So let me talk about one bad practice: taking or using other people’s stuff without attribution.  Most of the speakers I know can cite instances when they’ve seen their ideas (diagrams, quotes, etc) put up by others without pointing back to them.  There’s a distinction between citing something many people are talking about (innovation, microlearning, what have you) with your own interpretation, and literally taking someone’s ideas and selling them as your own.

One of our colleagues recently let me know his tools had been used by folks to earn money without any reimbursement to him (or even attribution).  Others have had their diagrams purloined and used in presentations.  One colleague found pretty much his entire presentation used by someone else!  I myself have seen my writing appear elsewhere without a link back to me, and I’m not the only one.

Many folks bother to put copyright signs on their images, but I've stopped, because it's too easy to edit them out if you're halfway proficient with a decent graphics package. And you can do all sorts of things to try to protect your decks, writing, etc., but ultimately it's very hard to protect them, let alone discover that it's happening. Who knows how many copies of someone's images have ended up in a business presentation inside a firm! People have asked permission, from time to time, and I have pretty much always agreed (and I'm grateful when they do ask). Others, I'm sure, are doing it anyway.

This isn't the same as asking someone to work for free, which is also pretty rude. There are folks who will work for 'exposure', because they're building a brand, but it's somewhat unfair. The worst are those who charge for things, like attendance or membership, or organizations who make money, yet expect free presentations! "Oh, you could get some business from this." The operative word is 'could'. Yet they are making money!

Attribution isn’t ‘name dropping‘. It’s showing you are paying attention, and know the giants whose shoulders you stand on.  Taking other people’s work and claiming it as your own, particularly if you profit by it, is theft. Pure and simple.  It happens, but we need to call it out.  Calling it out can even be valuable; I once complained and ended up with a good connection (and an apology).

Please, please, ask for permission, call out folks whom you see plagiarizing, and generally act in proper ways. I'm sure you are, but overall some awareness raising still needs to happen. Heck, I know we see amazing instances of it in people's resumes and speeches, but it's still not right. I've found the people in L&D to be generally warm and helpful (not surprisingly). A few bad apples isn't surprising, but we can do better. All I can do is ask you to do the right thing yourself, and call out bad behavior when you do see it. Thanks!

Chief Cognitive Officer?

13 February 2018 by Clark Leave a Comment

Businesses are composed of core functions, and they optimize them to succeed. In areas like finance, operations, and information technology, they prioritize investments, and look for continual improvement. But, with the shift in the competitive landscape, there's a gap that's being missed. And I'm wondering if a focus on cognitive science needs to be foregrounded.

In the old days, most people were cogs in the machine. They weren't counted on to think; instead, a few were thinking for the many. And those few were selected on that basis. But that world is gone.

Increasingly, anything that can be automated should be automated. The differentiators for organizations are no longer in the execution of the obvious; instead, the new advantage is the ability to outthink the competition. Innovation is the new watchword. People are becoming the competitive advantage.

However, most organizations aren't working in alignment with this new reality. Despite mantras like 'human capital management' or 'talent development', too many practices are in play that are contrary to what's known about getting the best from people. Outdated views like putting information into the head, squelching discussion, and avoiding mistakes are rife. And the solutions we apply are simplistic.

OK, so neuroscientist John Medina says our understanding of the brain is 'childlike'. Regardless, we have considerable empirical evidence and conceptual frameworks that give us excellent advice about things like distributed, situated, and social cognition. We know about our mistakes in reasoning, and approaches to avoid making them. Yet we're not seeing these in practice!

What I'm suggesting is a new focus. A new area of expertise to complement technology, business nous, financial smarts, and more. That area is cognitive expertise. Here I'm talking about someone with organizational responsibility, and authority, to work on aligning practices and processes with what's known about how we think, work, and learn. A colleague suggested that L&D might make more sense in operations than in HR, but this goes further. And, I suggest, it is the natural culmination of that thought.

So I'm calling for a Chief Cognitive Officer: someone whose responsibility ranges from aligning tools (read: UI/UX) with how we work, through designing continual learning experiences, to leveraging collective intelligence to support innovation and informal learning. Doing these things effectively is all linked to an understanding of how our brains operate, and having that understanding distributed isn't working. The other problem is that not having it coordinated means it's idiosyncratic at best.

One problem is that there's too little cognitive awareness anywhere in the organization. Where does it belong? The people closest are (or should be) the L&D (P&D) people. If not, what's their role going to be? Someone needs to own this.

Digital transformation is needed, but doing it without understanding the other half of the equation is sort of like using AI on top of bad data: you still get bad outcomes. It's time to do better. It's a radical reorg, but is it a necessary change? Obviously, I think it is. What do you think?

At the edge

31 January 2018 by Clark 4 Comments

Another response to my request for topics asked about moving from the classroom to the 'fringe'. Here, I have a very simple response: the case studies in Revolutionize Learning & Development. Each was chosen and structured to talk about the context, the specific situation, the plan, the results, and advice. Each also represents a diversity of settings and needs. These represent some folks working at the edge, away from the 'event'.

Mark Britz, facing more experts than novices, structured his corporate university as a network, not a series of courses. Communities of Practice served as a model for this thinking. This included an Enterprise Social Network and a Knowledge Management system.

Jos Arets and Vivian Heijnen at Tulser talked through a case study working with a medical care organization. The problem was too much hierarchy. Using a Human Performance Improvement approach, they decentralized the work to more self-directed teams. The solution included continuous assessment, mobile performance support, and coaching.

Coaching also played a role in the case study Jane Bozarth provided.  The issue was solving workplace problems. Instead of courses, the solution connected those with demonstrable skills to mentor those who could benefit.  A ‘yellow pages’ to find ‘in the moment’ help was also a part.

For an internal self-help solution, Allison Anderson developed a community of practice with events, a portal, and a networking platform. Here, the issue was getting disparate groups performing similar functions (L&D) to share best principles.

I had Charles Jennings recount his actions while serving as CLO in a global organization. With a mantra of ‘from event to process’, he used the 70:20:10 framework to rethink a balance of ‘push’ and ‘pull’ services.

In the book, they tell the stories in their own words. They unpack the thinking behind their choices, ‘showing their work’.  The contributions are very valuable, and I’m very grateful that they agreed to share them.  For that matter, you should find and track these folks!

Each of these was chosen as exemplary of the type of thinking that takes us from the old model to the 'edge'. We want to be looking holistically at how people think, work, and learn, and aligning our infrastructure (policies, technology, procedures, and culture) accordingly. This is the L&D part of a larger push to make the workplace more effective by making it more humane (read: more aligned with us).

Higher Ed & Job Skills?

13 December 2017 by Clark 2 Comments

I sat in on a Twitter chat yesterday, #DLNChat, a higher-ed-tech-focused group (run by EdSurge). The topic was the link between higher ed and job skills, and I was a wee bit cynical. While I think there are great possibilities, the current state of the art leaves a lot to be desired.

So, I currently don't think higher ed does a good job of preparing people for success in business. Higher ed focuses too much on knowledge, and uses assignments that don't resemble job activities. Frankly, there aren't too many essays in most jobs!

Worse, I don't think higher ed does a good job of developing meta-cognitive and meta-learning skills. There is little attempt to bridge assignments across courses, so your presentations in Psychology 101 and Sociology 202 and Business 303 aren't steadily tracked and developed. Similarly with research projects, or strategy, or… And there are precious few assignments (read: none) typically found where you actually make decisions like you would need to.

And, sadly, the use of technology isn’t well stipulated either. You might use a presentation tool, a writing tool, or a spreadsheet, maybe even collaboratively, but it’s not typically tied to external resources and data.

Yes, I know there are exceptions, and it may be changing somewhat, but it still appears to be the case. Research, write a paper, take a test.

Yet the role of developing higher skills is possible and valuable.  We could be providing more meaningful assignments, integrating meta-learning layers, and developing both meaningful skills and meta-skills.

This doesn’t have to be done at the expense of the types of things professors believe are important, but just with a useful twist in the way the knowledge is applied. It might lead to a revision of the curriculum, at least somewhat, but I reckon it’d likely be for the better ;).

Our education system, both K12 and higher ed, isn't doing anywhere near what it could, and should. As Roger Schank says, there are only two things wrong: what we teach, and how we teach it. We can do better. Will we?
