Learnlets

Clark Quinn’s Learnings about Learning

Learning from Experimentation

5 February 2019 by Clark 3 Comments

At the recent LearnTec conference, I was on a panel with my ITA colleagues, Jane Hart, Harold Jarche, and Charles Jennings. We were talking about how to lift the game of Modern Workplace Learning, and each had staked out a position, from human performance consulting to social/informal. Mine (of course :) was at the far end, innovation. Jane talked about how you had to walk the walk: working out loud, personal learning, coaching, etc. It triggered a thought for me about innovating, and that meant experimentation. It also occurred to me that experimentation leads to learning as well, and drives you to find new content. Of course I diagrammed the relationship in a quick sketch. I’ve re-rendered it here to talk about how learning from experimentation is also a critical component of workplace learning.

[Diagram: increasing experimentation, and even more learnings, based upon content]

The starting point is experimentation. I put in ‘now’, because that’s of course when you start. Experimentation means deciding to try new things, but not just any things. They should be things that have a likelihood of improving outcomes if they work. The goal is ‘smart’ experiments: ones that are appropriate for the audience, build upon existing work, and are buttressed by principle. They may be things that have worked elsewhere with good outcomes (or, less likely, things that didn’t work elsewhere but have an environmentally-sound reason to work for you).

Failure has to be ok. Some experiments should not work. In fact, a failure rate above zero is important, perhaps as much as 60%! If you can’t fail, you’re not really experimenting, and you don’t have the psychological safety to go along with the accountability. You learn from failures as well as from successes, so it’s important to expect them. In fact, celebrate the lesson learned, regardless of success!

The reflections from this experimentation take some thought as well. You should have designed the experiments to answer a question, and the experimental design should have been appropriate (an A-B study, or comparing to baseline, or…). That way, the lesson from the experiment is quickly discerned. You also need to have time to extract the lesson! The learnings here move the organization forward. Experimentation is the bedrock of a learning organization, if you consolidate the learnings. One of the key elements of Jane’s point (and others’) was that you need to develop this practice of experimentation for your team. Then, when understood and underway, you can start expanding: first with willing (ideally, eager) partners, and then more broadly.
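To make the ‘appropriate design’ point concrete, here’s a minimal sketch of analyzing an A-B experiment, say, comparing pass rates between a new course design and the existing baseline. The numbers and names are hypothetical, and this is just one way to check whether a difference is likely real rather than noise:

```python
# Minimal A-B comparison: did the experimental design beat the baseline?
# All figures are hypothetical; this uses a standard two-proportion z-test.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) for a difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, 2 * norm.sf(abs(z))

# Group A: new design; Group B: baseline course.
z, p = two_proportion_ztest(successes_a=46, n_a=60, successes_b=33, n_b=58)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests a real difference
```

The test only tells you whether you have a signal; the lesson, and what you’ll do as a result, still has to be extracted by a person.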

Not wanting to minimize, nor overly emphasize, the role of ‘content’, I put it in as well. The point is that in doing the experimentation, you’re likely to be driven to do some research. It could be papers, articles, blog posts, videos, podcasts, webinars, what have you, depending on your circumstances and interests. Who knows, maybe even courses! It includes social interactions as well. The point is that it’s part of the learning.

What’s not in the diagram, but is important, is sharing the learnings. First, of course, is sharing within the organization. You may have a community of practice or a mailing list that is appropriate. That builds the culture. After that, there’s beyond the org. If the learnings are proprietary, naturally you can’t. However, consider sharing an anonymized version at a local chapter meeting, and if it’s significant enough or you get good enough feedback, go out to the field. Present at a conference, for instance!

Experimentation is critical to innovation. And innovation takes a learning organization. This includes a culture where mistakes are expected, there’s time for reflection, practices for experimentation are developed, and more. Yet the benefit, an agile organization, is essential. Experimentation needs to be part of your toolkit. So get to it!

 

Skating to where L&D needs to be

30 January 2019 by Clark 3 Comments

“I skate to where the puck is going to be, not where it has been.” – Wayne Gretzky

This quote, over-used to the point of being a cliché, is still relevant. I was just reading Simon Terry’s amusing and insightful post on ‘best practices’ (against them, of course), and it reminded me of this phrase. He said “Best practices are often racing to where someone used to be”, and that’s critical. I’ve argued against best practices before, and I want to go further.

So he’s right that when we’re invoking best practices, we’re taking what someone’s already done and trying to emulate it. He argues that they’ve probably already iterated in making it work in their org. Also, by the time you replicate it, they’ve moved on. They may even have abandoned it! Which isn’t, directly, my complaint.

My argument against best practices is that they worked for them, but your situation’s different. The practice may be antithetical to your culture. And thinking that you can just graft it on is broken. Which is kind of Simon’s point too. And he’s right that if you do get it working, you’ll find that the time it has taken means it’s already out of date.

So my suggestion has been to look to best principles: why did it work? Abstract out the underlying principle, and figure out how (or even whether) to instantiate it in your own organization. You’d want to identify a gap in your way of working, search through possible principles, identify one that matches, and work to implement it. That makes more sense. And, of course, it should be a fix that, even if it takes time, will be meaningful.

But now I want to go further. I argue for comprehending the affordances of new technology to leapfrog the stage of replicating what was done in the old. Here I’m making a similar sort of argument. What I want orgs to do is to define an optimal situation, and then work toward that! Yes, I know it sounds like a fairytale, but I think it’s a defensible approach. Of course, your path there will differ from another’s (there’s no free lunch :), but if you can identify what a good state for your org would be, you can move to it. It involves incorporating many relevant principles in a coherent whole. Then you can strategize the path there from your current context.

The point is to figure out what the right future is, and skate there, not back-filling the problems you currently have. Go beyond individual principles to a coherent whole. Proactive instead of reactive. That seems to make sense to me. Of course, I recall the other old cliché, “when you’re up to your ass in alligators…”, but maybe it’s time to change the game a bit more fundamentally. Maybe you shouldn’t be in the swamp anyway? I welcome your thoughts!

 

What to evaluate?

22 January 2019 by Clark 4 Comments

In a couple of articles, the notion that we should be measuring our impact on the business is called out. And being one who says just that, I feel obligated to respond.  So let’s get clear on what I’m saying and why.  It’s about what to evaluate, why, and possibly when.

So, in the original article, by my colleague Will Thalheimer, he calls the claim that we should focus on business impact ‘dangerous’! To be fair (I know Will, and we had a comment exchange), he’s saying that there are important metrics we should be paying attention to about what we do and how we do it. And no argument! Of course we have to be professional in what we do. The claim isn’t that the business measure is all we need to pay attention to, and he acknowledges that later. Further, he does say we need to avoid what he calls ‘vanity metrics’: measures of just how efficient we are. And I think we do need to look at efficiency, but only after we know we’re doing something worthwhile.

The second article is a bit more off kilter. It seems to ignore the value of business metrics altogether. It talks about competencies and audience, but not impacting the business. Again, the author raises the importance of being professional, but still seems to be in the ‘if we do good design, it is good’ camp, without seeming to even check whether the design is addressing something real.

Why does this matter?  Partly because, empirically, what the profession measures are what Will called ‘vanity’ measures. I put it another way: they’re efficiency metrics. How much per seat per hour? How many people are served per L&D employee?  And what do we compare these to?  Industry benchmarks. And I’m not saying these aren’t important, ultimately. Yes, we should be frugal with our resources. We even should ultimately ensure that the cost to improve isn’t more than the problem costs!  But…

The big problem is that we’ve no idea if that butt in that seat for that hour is doing any good for the org.  We don’t know if the competency is a gap that means the org isn’t succeeding!  I’m saying we need to focus on the business imperatives because we  aren’t!
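To illustrate the difference, here’s a hypothetical back-of-the-envelope comparison (all figures invented): the efficiency metric can look great while the question that matters, whether the intervention moves a business number more than it costs, goes unasked.

```python
# Hypothetical figures contrasting an efficiency metric with an impact check.
course_cost = 40_000                  # development + delivery
learners, hours = 500, 2              # seats and seat-hours

cost_per_seat_hour = course_cost / (learners * hours)
print(f"Cost/seat/hour: ${cost_per_seat_hour:.2f}")   # looks great vs. benchmarks

# The impact question: did the business metric move more than we spent?
error_costs_before, error_costs_after = 300_000, 220_000
net_benefit = (error_costs_before - error_costs_after) - course_cost
print(f"Net benefit to the org: ${net_benefit:,}")    # the number that matters
```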

And then, yes, let’s focus on whether our learning interventions are good: are we using best practices, the least amount of content that’s still good, etc.? Then we can ask if we’re efficient. But if we only measure efficiency, we end up taking PDFs and PPTs and throwing them up on the screen. If we’re lucky, with a quiz. And this is not going to have an impact.

So I’m advocating the focus on business metrics because that’s part of a performance consulting process to create meaningful impacts. Not in lieu of the stuff Will and the other author are advocating, but in addition. It’s all too easy to worry about good design, and miss that there’s no meaningful impact.

Our business partners will not be impressed if we’re designing efficient, and even effective, learning that isn’t doing anything. Our solutions need to be targeted at a real problem and address it. That’s why I’ll continue to say things like “As a discipline, we must look at the metrics that really matter… not to us but to the business we serve.” Then we also need to be professional. Will’s right that we don’t do enough to assure our effectiveness, and focus only on efficiency. But it takes it all, impact + effectiveness + efficiency, and I think it’s dangerous to say otherwise. So what say you?

Redesigning Learning Design

16 January 2019 by Clark 2 Comments

Of late, a lot of my work has been redesigning learning design: helping orgs transition their existing design processes to ones that will actually have an impact. That is, someone’s got a learning design process, but they want to improve it. One idea, of course, is to replace it with some validated design process. Another approach, much less disruptive, is to find opportunities to fine-tune the design. The idea is to find the minimal set of changes that will yield the maximal benefit. So what are the likely inflection points? Where am I finding those spots for redesigning? It’s about good learning.

Starting at the top, one place where organizations go wrong right off the bat is the initial analysis for a course. There’s the ‘give us a course on this’ problem, but even with a decent analysis the process can go awry. Side-stepping the big issue of performance consulting (do a reality check: is this truly a case for a course?), we get into working to create the objectives. It’s about how you work with SMEs. Understanding what they can, and can’t, do well means you have the opportunity to ensure that you get the right objectives to design to.

From there, the most meaningful and valuable step is to focus on the practice. What are you having learners  do, and how can you change that?  Helping your designers switch to good  assessment writing is going to be useful. It’s nuanced, so the questions don’t  seem that different from typical ones, but they’re much more focused for success.

Of course, to support good application of the content to develop abilities, you need the right content! Again, getting designers to understand the nuances that separate useful examples from mere stories isn’t hard, but it’s rarely done. Similarly, the reason you want models, and not just presentations about the concept, isn’t fully realized.

Of course, making it an emotionally compelling experience has learning impact as well. Yet too often we see the elements just juxtaposed instead of integrated. There  are systematic ways to align the engagement and the learning, but they’re not understood.

A final note is knowing when to have someone work alone, and when some collaboration will help. It’s not a lot, but collaboration at the right time (if it happens at all) can make a valuable contribution to the quality of the outcome.

I’ve provided many resources about better learning design, from my 7 step program white paper  to  my deeper elearning series for Learnnovators.  And I’ve a white paper about redesigning as well. And, of course, if you’re interested in doing this organizationally, I’d welcome hearing from you!

One other resource will be my upcoming workshop at the Learning Solutions conference on March 25 in Orlando, where we’ll spend a day working on learning experience design, integrating engagement and learning science.  Of course, you’ll be responsible for taking the learnings back to your learning process, but you’ll have the ammunition for redesigning.  I’d welcome seeing you there!

Thinking Strategically

12 December 2018 by Clark Leave a Comment

Repurposed from another use.

In today’s environment of increasing change and competition, coupled with growing ambiguity and uncertainty, L&D just can’t be about delivering courses on demand. Optimal execution, the result of formal learning, is only the cost of entry; continual innovation will be the necessary element for organizations to thrive. Organizations have to move faster, be more agile, and adapt more effectively. And it’s here that L&D has a true opportunity, and imperative, to contribute. It’s about thinking strategically.

That means, intrinsically, that L&D has to start thinking about how to move forward. People are learning on their own more and more. The tools to access information are quite literally in the palms of their hands. L&D can no longer be about controlling content. Instead, a new role is needed.

Rethinking Formal

How does L&D cope? The answer involves a couple of major shifts, from the familiar to the challenging. The first is that courses go from an event model to an approach that better reflects how we actually learn. We need spaced, distributed practice to truly master our skills. This is harder than the ‘information dump and knowledge test’ that too often characterizes organizational learning, which brings up two principles: 1) formal learning should be reserved for when it absolutely, positively has to be in the head, and 2) when possible, put information in the world.
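As a sketch of what ‘spaced, distributed’ could mean operationally (the expanding intervals here are a common heuristic, not a validated prescription), reactivation events might be scheduled out from the initial learning event like so:

```python
# Hypothetical expanding-interval schedule for spaced practice.
from datetime import date, timedelta

REACTIVATION_DAYS = [1, 3, 7, 14, 30]  # a common expanding-interval heuristic

def practice_schedule(initial_event: date) -> list[date]:
    """Dates for follow-up practice after the initial learning event."""
    return [initial_event + timedelta(days=d) for d in REACTIVATION_DAYS]

for session in practice_schedule(date(2018, 12, 12)):
    print(session.isoformat())
```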

That latter point, putting information in the world, refers to performance support, the first step in broadening the L&D perspective. The point is that we too often use courses when cognitive skills are not the problem. Performance consulting is a process to identify the real problem and cause, and provide appropriate solutions. Performance support is often a solution we can use instead of a course! Note that this is a first step out of the comfort zone, as it means engaging with our stakeholders, the business units we are tasked to assist. But it’s past time!

Beyond Formal

Doing courses the right way, coupled with performance support, is the key to optimizing execution. But that’s just the starting point. The key to organizational improvement will be the ability to learn. And that should be L&D’s role. But this means we have to again step out of our comfort zone.

We need to branch out into informal and social learning. Employees do learn on their own, but the evidence suggests that they’re not particularly good at it. There are lots of folk stories about what works that just aren’t aligned with what science tells us! Assisting the individuals and the organization to learn, independently and collectively, is the new opportunity. Assisting the organization to innovate means moving to the core of competitive advantage. And that’s a valuable place to be.

Wishful thinking isn’t the answer. It takes both knowing the bigger picture, the performance ecosystem, and working strategically to get from here to there. That’s what’s on the table. It might be scary, but the opportunity offers a brighter future for L&D. I’m excited about the prospects, and hope you’ll be making the move. I’d welcome the opportunity to assist, as well.

Learning Experience Portals?

11 December 2018 by Clark Leave a Comment

What is a learning experience platform?  Suddenly the phrase seems ubiquitous, but what does it mean?  It’s been on my mental ‘todo’ list for a while, but I finally spent some time investigating the concept. And what I found as the underlying concept mostly makes sense, but I have some challenges with the label.  So what am I talking about?

It’s ImPortal!

Some background: when I talk about the performance ecosystem, it’s not only about performance support and resources, but finding them. That is, it includes the need for a portal. When I ask audiences “how many of you have portals in your org?”, everyone raises their hands. What also emerges is that they have bunches of them. Of course, they’re organized by the business unit offering them: HR, product, sales, they all have their own portals. Which doesn’t make sense. What does make sense is to have a place to go for things, organized by people’s roles and membership in different groups.

A user-centered way of organizing portals makes sense, then. People need to be able to see relevant resources in a good default organization, have the ability to reorganize to a different default, and search. Federate the portal and search over all the sources of resources, not some subset. I’ve suggested that it might make sense to have a system on top of the portals that pulls them together in a user-centric way.
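As a sketch of what ‘federating’ might mean in practice (the portal names, resources, and scoring here are hypothetical illustrations, not any product’s API), a user-facing layer could fan a query out to each business unit’s portal and rank the merged results by the user’s roles:

```python
# A toy federated-search layer over multiple business-unit portals.
from dataclasses import dataclass

@dataclass
class Resource:
    title: str
    source: str      # which portal it came from
    roles: set       # roles the resource is tagged for

PORTALS = {
    "HR":      [Resource("Onboarding guide", "HR", {"manager", "new-hire"})],
    "Sales":   [Resource("Pricing objection guide", "Sales", {"sales"})],
    "Product": [Resource("Release 4.2 feature video", "Product", {"sales", "support"})],
}

def federated_search(query: str, user_roles: set) -> list:
    """Search every portal, then rank results the user's roles can use."""
    hits = [r for portal in PORTALS.values() for r in portal
            if query.lower() in r.title.lower()]
    # Rank role-relevant resources first, regardless of owning unit.
    return sorted(hits, key=lambda r: len(r.roles & user_roles), reverse=True)

for r in federated_search("guide", {"sales"}):
    print(r.source, "->", r.title)
```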

An additional issue is that the contents of said portal should be open, in the sense that all users should be able to contribute their curated or created resources, and the resources can be in any format: video, audio, document, even interactive. In today’s era of increasing speed of change and decreasing resources for meeting the learning needs, L&D can no longer try to own everything. If you create a good culture, the system will be self-policing.

And, of course, the resources aren’t all about learning. Performance support is perfectly acceptable. The in-the-moment video is as needed as the course on a new skill. Anything people want, from learning resources in a library to that quick checklist, should be supported.

The Learning Experience Platform(?)

As I looked into Learning Experience Platforms (LXPs), I found that (underneath all the hype) they’re really portals: ways for content to be aggregated and made available. There are other possible features – libraries, AI-assistance, paths, assessments, spaced delivery – but at core they’re portals. The general claim is that they augment an LMS, not replace it. And I buy that.

The hype is a concern: microlearning, for instance (one article referred to the afore-mentioned in-the-moment video, glossing over that you may learn nothing from it and have to access it again). And of course there are exaggerated claims about who does what. It appears several LMS companies are now calling themselves LXPs. I’ll suggest that you want such a tool designed to be a portal, not grafted onto another fundamental raison-d’être. Similarly, many also claim to be social. Ratings would be a good thing, but trying to also be a social media platform would not.

Ultimately, such a capability is good. However, if I’m right, Learning Experience Platform isn’t the right term; really, they’re portals. Both learning and experience are wrong: they can be about performing in the moment, and generally they’re about access, not generating experiences. And I could be wrong.

Take-home?

Ecosystems should be integrated from best-of-breed capabilities. One all-singing, all-dancing platform is likely to be wrong in at least one of the subsidiary areas, if not more, and you’re locked in. I think a portal is a necessary component, and the LXPs have many performance & development advantages over generic portal tools.

So I laud their existence, but I question their branding. My recommendation is  always to dig beneath the label, and find the underlying concept. For instance, each of the concepts underpinning the term microlearning is valuable, but the aggregation is problematic. Confusion is an opening for error. So too with LXP: don’t get it confused with learning or creating experiences.  But do look to the genre for advanced portals.  At least, that’s my take: what’s yours?

Experimentation specifics

5 December 2018 by Clark Leave a Comment

I’m obviously a fan of innovation, and experimentation is a big component of innovation. However, I fear I haven’t really talked about the specifics.  The details matter, because there are smart, and silly, ways to experiment. I thought I’d take a stab at laying out the specifics of experimentation.

First, you have to know what question you’re trying to answer. Should we use a comic or a video for this example?  Should we use the content management system or our portal tool to host our learning and performance support resources?  What’s the best mechanism for spacing out learning?

An important accompanying question is “how will we know what the answer is?” What data will discriminate? You need a way to tell whether the answer is ‘we know’, ‘we can’t know’, or ‘we need to revise and run it again’.

Another way to think about this is: “what will we do differently if we find this?” and “what will we do differently if it turns out differently?” The point is to know not just what you’ll know, but  what it means.

You want to avoid random experimentation. There are the ‘let’s try it out’ pilots that are exploratory, but you still want to know what question you’re answering. Is it “what does it take to do VR?” or “let’s try using our social media platform to ‘show our work’”?

Then you need to design the experiment. What’s the scope? How will you run it? How will you collect data? Who are your subjects?  How will you control for problems?
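One way to enforce this discipline is to write the design down before you run anything. Here’s a hypothetical pre-registration template (the field names are my invention, not any standard) that captures the question, the design, the data, and the actions decided up front:

```python
# A hypothetical pre-registration template for an L&D experiment.
from dataclasses import dataclass, field

@dataclass
class ExperimentPlan:
    question: str            # what we're trying to answer
    design: str              # e.g., "A-B study" or "compare to baseline"
    scope: str               # how big, how long
    subjects: str            # who participates
    data_collected: list     # only data we know what we'll do with
    success_criterion: str   # what result means "it worked"
    if_it_works: str         # action we'll take on success
    if_it_fails: str         # action we'll take on failure
    confounds: list = field(default_factory=list)  # problems to control for

plan = ExperimentPlan(
    question="Does a comic example outperform a video example?",
    design="A-B study",
    scope="One module, two cohorts, one quarter",
    subjects="New sales hires",
    data_collected=["post-test scores", "time on task"],
    success_criterion="Comic cohort scores higher, p < 0.05",
    if_it_works="Use comics for this content type going forward",
    if_it_fails="Keep video; document the lesson learned",
)
```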

One of the regular claims has been “don’t collect any data you don’t know what you’ll do with”. These days, you can run exploratory data analysis, but still, accumulating unused data may be a mistake.

The after-experiment steps are also important. Major questions include: “what did we learn?”, “do we trust the results?”, and “what will we do as a result?” Then you can follow up with the actions you determined up front that would be predicated on the outcomes you discovered.

Experimentation is a necessary component of growth. You have to have the mindset that you learn from the experiment, regardless of outcome. You should have a budget for experimentation and expect a degree of failure. It’s ok to lose, if you don’t lose the lesson! And share your learnings so others don’t have to run the same experiment. So experiment, just like I did here; is this helpful? If not, what would it need to be useful?

Citations

28 November 2018 by Clark 2 Comments

Following on my thoughts on writing yesterday, this was a topic that didn’t fit (the post got too long ;). So here we go… Colleagues have written that citations are important. If you’re making a claim, you should be able to back it up. On the other hand, if you’re citing what you think is ‘received wisdom’, do you need to bother? Pondering…

Now, citations can interfere with the flow, I believe. If not the reading, they can interfere with the flow of my writing! (And, I’ve been accused of ‘name dropping‘, where instead I believe it’s important to both acknowledge prior work and show that you know what’s been done.) Still, it’s important to know what to cite, and when.

I admit that I don’t always cite the claims I make, because I take them as givens. I may say something like “we know” or otherwise presume that what I’m saying is an accepted premise. One problem, of course, is that I don’t know what others know (and don’t). And, of course, this isn’t an official article source; this is my blog ;). Still, when I’m talking about something new to me (like thoughts from books), I will cite the locus.

Articles are different. When I write those, I try to provide sources. In both cases I generally don’t go to the extent of journal article links, because I don’t expect that folks have easy access to them, and so I prefer to cite more commonly available resources, like books that have ‘digested’ the research.

And when I write ‘take down’ articles, I don’t cite the offender. It’s to make the point, not shame anyone. If you’re really curious, I’m sure you can track it down.

And, realize, I don’t have easy access to journals either. Not being affiliated with an institution, I don’t have access to the original articles behind a paywall. I tend to depend on people who summarize, including books and articles that digest the research. Still, I’ve more than a decade’s grounding in the original materials and am able to make inferences. And of course occasionally I’ll be wrong. Sometimes, I’ll even admit it ;).

The issue really is when you need to make a citation. And I reckon it’s when you’re stating something that folks might disagree with. And I can’t always anticipate that. So I’ll try to consistently point to the basis for any claims I think might be arguable, or state that it’s my (NSH :) opinion. And you can always ask! Fair enough?

Trends for 2019?

21 November 2018 by Clark 3 Comments

It’s already started! Like Christmas (which morally shouldn’t even be thought about before Thanksgiving), requests for next year’s trends should be on hold until at least December. Still, a request came in for my thoughts. Rather than send them off and await their emergence, I toss them out here, with a caveat: “It’s tough to make predictions, particularly about the future.”

1. What, in your opinion, are the main Digital Learning (DL) trends for 2019?

I think the main trend will be an increasing exploration of alternatives to ‘courses’. This will include performance support and social networks. Similarly, models for formal learning will shift from the ‘event’ model to a more sustained and distributed framework that segues from spaced learning through coaching.

I sincerely hope that we’ll be paying more attention to aligning learning with cognition, and pursue ‘shiny objects‘ only  after we establish a solid foundation. Instead of looking for the magic bullet, we’ll recognize that our brain architecture means we need a drip-irrigation model, not a flood.

This may be wishful thinking, but I believe we’re beginning to see some positive signs. We’re seeing more interest in  learning science, growing awareness of  myths, and more. Hopefully there’s an accompanying shift from being fascinated by technology to being interested in what technology can  do for better learning outcomes!

2. What are the main threats and obstacles, then?

The main threats and obstacles are several. For one, our own lack of understanding of the foundations of our industry hampers us. When we don’t really understand learning, we can be swayed by well-designed distractors.  That’s the second factor: there are those who are happy selling us the latest fad.

Coupled with this is a lack of business awareness in our own practices. We measure the wrong things, e.g. efficiency – such as cost/seat/hour. And we’re reluctant to talk to the stakeholders in the business. We should be worried about impact: are we reducing costs, increasing profitability or customer satisfaction?

Overall, we’re hampered by a true lack of professionalism. We learn the tools, and crank stuff out, but we’re not concerned enough about whether it’s demonstrably the  right  stuff.

3. Do you believe in the AI and DL robotization? When does this bright future come?

I believe in increasing use of AI to support functions that shouldn’t involve humans. It’s silly to have people  doing rote things we can teach computers to do. That includes responding to knowledge requests, and filtering, and a few other tasks.  However, I think we need to recognize that not all the things needed in learning, such as evaluating complex work products,  should be done by machines. I think we should look for when we can automate, and when we want people in the loop.

So I’m more interested in IA (not AI): Intelligence Augmentation.  That is, what is the  right distribution of tasks between machine and people?  There are things that computers do well, but they’re remarkably brittle; as of yet they don’t handle edge cases, or make good inferences in the grey areas very well. That’s when you want people. I think our design discipline needs to be smart about when to use each, and how they complement each other.

The future of IA is already underway, as is AI. We’re seeing, and will see more, uses of AI to filter, to answer questions, and to take over rote tasks. These results are not yet ready to be termed ‘bright’, however. Some success stories are emerging, but I suspect we don’t hear much yet about the money being wasted. The time of consistent, effective synergies is still a few years off.

4. Your advice to the market for 2019.

Work smarter!  Get smart about learning science, about business, and about what technology can (and can’t) do. I’d like to see: staff pushing more for real impact via metrics, leaders asking for business cases not order taking, vendors pushing solutions not resource savings, and buyers asking for real evidence.  I’d like to see smarter purchasing, and the snake oil sales folks’ business withering away.  We can do better!

And I realize that my proposed trends are more wishful thinking than predictions. One of my favorite quotes is from Alan Kay: “The best way to predict the future is to invent it”, so I keep pushing this agenda. My goal is simple: to make this a field I’m truly proud to be working in. The folks in L&D, I think, are some of the nicest folks; they’re here because they want to help others (you don’t go into L&D to become rich ;). I think there’s a promising future, but it doesn’t start with AI or ML or DL; it starts with getting down to the realities of how we learn, and how we can support it. When we do that, I think our future will be one which helps our organizations and our people thrive. Our future can be bright, and it’s up to us to make it so.

Content Confusion

14 November 2018 by Clark 1 Comment

I read, again and again, about the importance of ‘content’ in learning. And I don’t disagree, but… I think there’s still a problem with it. And where I get concerned is about what is meant by the term. Just what do we mean by ‘content’? And why should we care about the distinction?

My worry is twofold. For one, I get concerned that talking about content foregrounds ‘information’. And that’s a problem. I’ve been concerned for a while about how it’s too easy to allow knowledge to dominate learning objectives. ‘Know’, ‘understand’, etc., are generally not meaningful objectives. Objectives should be ‘able to use ___ to ___’. Talking about content, as I’ve discussed before, leads us down a slippery slope to curriculum being defined as content.

My second concern is related. It’s about ‘content’ being meant to include concepts, examples, and practice. Yet, if we don’t separate interactivity from consumption, we can make nonsensical learning interactions instead of meaningful applications of concepts to contexts. Recognition is not powerful learning.

Look, I get it. From a technical point of view, e.g. a content management system perspective, it’s all content. It’s addressable files. They may just report access, or they can report success/failure, or many other things. However, again, this view can make it easy to do bad things. And, as the book Nudge (which I just read) suggests, we want to make it easy to do the right things, and make you have to work to do things inappropriately.

So I may be being a pedant about this, but I have a reason. It won’t matter when we’re all on the same page about good learning design, with practice foregrounded and concepts and examples as learning resources, not the goal. But I don’t think we’re there yet. And language matters in shaping thinking. It may not be the full Whorfian Hypothesis, but language does influence how we think and what we do.

For principled and practical reasons, I think we want to distinguish between content (concepts and examples) and interactives (practice). At least as designers. Others can focus differently, but we have our own language for other things (I’d argue our use of the term ‘objectives’ is different from business folks’, for example), and I argue we should do so here as well. What say you?
