Learnlets


Clark Quinn’s Learnings about Learning

Beneath the surface

15 August 2023 by Clark 1 Comment

I had just finished teaching my six-week workshop on the missing LXD (where we unpack nuances) when I received a message from a colleague. In it, she recounted how she’s being pushed on video length. It struck me that what was missing was a finer focus, and it drove me back to previous writings. What I replied is that people focusing on video length are missing the point. It’s yet another case where you need to go beneath the surface-level issues. Or, as I’ve said before, details matter!

I’ve railed, e.g. in my book on myths, against the claim that our attention span has dropped to 8 seconds. And, despite a newer book, based upon research, that suggests our attention span has dropped to 47 seconds, I think there’s more to it. For instance, attention is largely volitional (cf. the cocktail party effect). We may be conditioned to be more open to being disturbed; certainly there are more, and more effective, distractions! Yet I don’t think our capability for attention has shifted (we don’t evolve that fast); rather, our intents may have changed.

For instance, we can still surface from involvement in a movie/book/game and note “how’d it get so late?” So it’s a matter of what we want or intend to attend to. In cognitive science, we separate out conation, intent or motivation (see also Self-Determination Theory): that is, whether we are willing to expend effort towards something. We have to give someone a clear reason to attend, one that they accept. Then, we have to maintain it.

There is research (PDF) suggesting that attention to video flags after 6 minutes. However, that’s from a particular context, and it may not generalize. Again, think about attending to a movie for more than an hour! I think it helps to have a clear intent, and then maintain a commitment to it. If you do, and the audience resonates, they will attend. There are clear benefits to brevity, but as colleague JD Dillon once opined, videos should be as long as they need to be, not arbitrarily truncated.

In short, I think folks are focusing on the wrong issues. My point to my colleague was to focus first on the relevance and value of the video, not the length. That may suggest a trim, but it may also suggest more focus on the WIIFM, and on maintaining motivation. Ultimately, you’ve got to go beneath the surface and find the real issue. Nuances matter, and we can’t expect others to go into the depths we do, but they do have to let us do our jobs. Which means we have to know our stuff. Please, do!

Don’t use AI unsupervised!

8 August 2023 by Clark Leave a Comment

A recent post on LinkedIn tagged me in. In it, the author was decrying a post by our platform host which mentioned Learning Styles. The post, as with several others, asked experts to weigh in. That, I’ll suggest, is a broken model. Here’s my take on why I say don’t use AI unsupervised.

To begin with, learning styles aren’t a thing. We have instruments, but they don’t stand up to psychometric scrutiny. Further, reliable research evaluating whether adapting to them has a measurable impact comes up saying ‘no’. So, despite fervent (and misguided) support, folks shouldn’t promote learning styles as a basis for adaptation. Yet that’s exactly what the article was suggesting!

So, as I’ve mentioned previously, you can’t trust the output of an LLM. They’re designed to string together sentences by predicting the most probable thing to say next. Further, they’ve been trained, essentially, on the internet, which entails all the guff as well as the good stuff. So what comes out of their ‘mouths’ has a problematically high likelihood of being utter bugwash (technical term).

In this case, LinkedIn (shamefully) is having AI write articles, and then circulating them for expert feedback. To me that’s wrong for two reasons. Each is bad enough in its own right, but together they’re really inexcusable.

The first reason is that these articles have a problematically high likelihood of saying something that’s utter bugwash! That then gets out there, without scrutiny, obviously. Which, to me, doesn’t reflect well on LinkedIn: they’re willing to publicly demonstrate that they don’t review what they provide. Their unwillingness to interfere with obvious scams is bad enough, but this really seems expedient at best.

Worse, they’re asking so-called ‘experts’ to comment on it. I’ve had several requests to comment, and when I review the articles, they aren’t suitable for comment. Moreover, asking folks to do this, for free, on generated content is really asking for free work. Sure, we comment on each other’s posts. That’s part of community, helping everyone learn. And folks are contributing (mostly) their best thoughts, willing, also, to get corrections and learn. (Ok, there’s blatant marketing and scams, but what keeps us there is community.) But when the hosting platform generates its own post, in ways that aren’t scrutable, and then invites people to improve it, it’s not community; it’s exploitation.

Simply, you can’t trust the output of LLMs. In general, you shouldn’t trust the output of anything, including other people, without some vetting. Some folks have earned the right to be trusted for what they say, including my own personal list of research translators. Then, you shouldn’t ask people to comment on unscrutinized work. Even your own, unless it’s the product of legitimate thought! (For instance, I usually reread my posts, but it is hopefully also clear it’s just me thinking out loud.)

So, please don’t use AI unsupervised, or at least not until you’ve done testing. For instance, you might put policies and procedures into a system, but then test the answers across a suite of potential questions (a rough sketch of what I mean follows). You probably can’t anticipate them all, but you can do a representative sample. Similarly, don’t trust content or questions generated by AI. Maybe we’ll solve the problem of veracity and clarity, but we haven’t yet. We can do one or the other, but not both. So, don’t use AI unsupervised!
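To make that concrete, here’s a minimal sketch of what such supervision might look like. It’s purely illustrative: the ask() function is a hypothetical stand-in for whatever your system actually exposes, and the questions and required keywords are invented. The point is simply that a reviewer-approved suite gets run, and a human looks at the results before anything ships.

```python
# Purely illustrative sketch: run a representative question suite against an
# AI-backed policies-and-procedures system before trusting it unsupervised.
# ask() and the test suite are hypothetical stand-ins, not any real API.

def ask(question: str) -> str:
    """Stand-in for a call to your actual system; returns a canned answer here."""
    return "New hires accrue 15 days of vacation per year."

# Each entry pairs a question with keywords a reviewer says a correct answer must contain.
TEST_SUITE = [
    ("How many vacation days do new hires get?", ["15 days"]),
    ("Who approves travel over $5,000?", ["finance director"]),
]

def run_suite():
    failures = []
    for question, required in TEST_SUITE:
        answer = ask(question).lower()
        missing = [kw for kw in required if kw.lower() not in answer]
        if missing:
            failures.append((question, missing))
    return failures

if __name__ == "__main__":
    # A human still reviews the failures (and spot-checks the passes).
    for question, missing in run_suite():
        print(f"FAIL: {question!r} -- answer lacked {missing}")
```

Obviously real vetting goes well beyond keyword matching, but even this much makes the ‘representative sample’ idea tangible.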

Not Working harder

2 August 2023 by Clark Leave a Comment

Seek > Sense > Share

A colleague recently suggested that I write about how I get so much done. Which is amusing to me, since I don’t think I get much done at all! Still, her point is that I turn around requests for posts the next day, generate webinars quickly, etc. So, I thought I’d talk a bit about how I work (at risk of revealing how much I, er, goof off). It’s all about not working harder! It may be that I’m not doing a lot compared to folks who work in more normal situations, but apparently I’m at least perceived as productive.

So, as background, I have a passion for learning. I remember sitting on the floor, poring through the (diagrams in the) World Book. My folks reinforced this, in a story I think I’ve told, about how the only acceptable reason for being excused from the dinner table was looking things up. Actually, while I did well in school, it wasn’t perfect, because I was learning to learn, not learning to do well in school. That was just a lucky side effect. I went on and got a Ph.D. in cognitive science, which I argue is the best foundation for dealing with folks. (Channeling my advisor.)

So, I’ve been lucky to have a good foundation. I do recall another story, which I may have also regaled you with, about my father’s friend who succeeded in a job despite having stated, in effect, that if it appeared he was asleep, he was working, and that he’d still do the work of two. (He did.) The point being that taking time to learn and reflect is useful. I did the same, spending time reading magazines with my feet up on the desk in my first job out of college, but still producing good work.

That’s continued through my graduate school career, academic life, workplace work, and life as a consultant. The latter wasn’t my chosen approach; it was involuntary (despite appearing to be desirable). Somehow, it became a way of life. (And I’ve realized there are lots of things I wouldn’t have been able to do if I’d had a real job.) What I do, regularly, are two major things which I think are key.

The first is that I continue to learn. I read (a lot). Partly it’s to stay up on the news in general, but I also try to track what happens in our field. I check in on LinkedIn, largely through the folks I follow. I’ve tried to practice Harold Jarche’s PKM, as I understand it. That is, I update both the folks I follow and the media I follow them on (for instance, Twitter is dwindling and I’m now more on Mastodon).

I also allow time for my thoughts to percolate. For instance, I take walks at least a couple of times a week. I can put a question or thought in my mind and head out. To capture thoughts, I use dictation in Apple’s Notes. I also read fiction and play games, to allow thoughts to ferment. (My preferred metaphor; you can also choose percolate or incubate. ;) I even do household chores as a way to allow time to think. Basically, it looks like I’m spending a lot of time not working. Yet this is critical to coming up with new ideas!

I also take time to organize my thoughts. Diagramming things is one way I understand them. I blog (like this) for the same reason. These are my personal processing mechanisms. When I do presentations and write articles for others, they’re the result of the time I’ve spent here. If you look at Harold’s process, I set up good feeds to ‘seek’ (and do searches as well), I process actively through diagramming and posting, and then I share, through those same posts as well as presentations and workshops and books and…

[Diagram: How models connect to context to make predictions.]

Note that it’s not about remembering rote things, but it’s about seeing how they connect. That takes time. And work. But it pays off. I’ll suggest that turning the ideas into models, connected causal stories, helps. So, it’s about understanding how things work, not just ‘knowing’ things. It’s about being able to predict and explain outcomes, not just to tout statistics and facts.

With this prep, I can put together ideas quickly. I’ve thought them through, so I have formed opinions. It’s then much easier to decide how to string them together for a particular goal. The list of things I’ve thought about continues to grow (even if I’ve forgotten some and joyfully rediscover them!). I can write them out, or create a presentation; both are basically just linear paths through the connections.

How do I have time to do this? Well, I work from home, so that makes it easier. I also don’t work a regular job, and I’ve gotten reasonably effective at using tools to get things done. For instance, I’m now using Apple’s Reminders to track to-dos, along with its Calendar. (I’m cheap; I’ve used fancier tools, but have found these suffice.) Needless to say, I’m quite serious when I say “if a commitment I make doesn’t get into my device, we never had the conversation.”

Thus, it’s about working smarter. I don’t have an org, so these are just my practices. If you watched me, you’d see bursts of productivity combined with lots of ‘down time’. That’s hard to accept, as an org, yet that’s the way we work best. As we start having tools that automate more of our rote tasks, we should retain the creative things like painting, music, and more, not relegate them to AI. Then we can start working more like the creative beings we are, and start recognizing that taking time out for the non-productive is actually more productive. That’s how we work smarter, and end up not working harder.

Emotion is the new ID

25 July 2023 by Clark Leave a Comment

Ok, so the title’s a bit over the top, but…I think there’s something here. Everyone is now talking about how AI can take over a bunch of ID roles. Frankly, I agree (and have said so). In thinking about it (on a walk, as usual ;), I realized there’s a reframing, and I think it’s important. Despite being a tad flip, I do think emotion is the new ID.

So, there are things AI can’t do. It doesn’t really understand, basically. It can look at relationships and infer structure from good content! (That is, if there’s bad content, the inferences are also bad.) We still need oversight. So, one role will be to check AI output for accuracy. However, that’s something that largely comes from domain expertise. We’ve always needed subject matter experts to review output.

When I say AI doesn’t really understand, I mean more than that, however. It manipulates syntax to generate semantics, but semantics is still largely cognitive. Yet as humans, we’re affective (personality) and conative (motivation) as well. In short, we’re emotional, not purely rational. Context matters. Meaning matters! We need to address these elements in our learning experiences.

Thus, I posit that it takes humans to write the introduction to learning experiences, to set the ‘hook’. Similarly, it takes humans to make practice activities (aka assessments) that have an engaging context, appropriate challenge, and that naturally embed the task. Essentially, making the practice meaningful. That’s something we, uniquely, can do.

When I wrote my book Make It Meaningful, I was explicitly addressing the fact that much of ID addresses the learning science alone (if even doing that). It was designed as a complement to my learning science book, to provide a complete LXD picture. What I didn’t expect was the advent of the LLM AIs. Yet, serendipitously (it seems to me, with the usual caveat ;), the latest book addresses the most important part of learning that AI can’t do now or in the foreseeable future.

Look, I strongly believe that we don’t pay enough attention to engagement, and yet we can. (Note: I do not mean the trivial engagement approaches: tarted-up content presentation like ‘click to see more’, fancy production values, etc.) I run workshops online and face-to-face on this because it’s my passionate and informed belief, not because it’s going to make me rich (it won’t). It just so happens that with this advent, I think it’s even more true that emotion is the new ID. Fortunately, I think we do know how to do it. I think it gives us a role going forward; a way to answer the question: but what about AI? We just have to be prepared to respond. Are you?

Give us the right info!

18 July 2023 by Clark Leave a Comment

I’ve previously complained about the RFPs that orgs send out. And, having just reviewed one, I have to say that I stopped short. There’s more that a company should do if they want to get a good proposal. It has to do with the actual requirements for the learning, not for the proposal. There’s always that stuff (what sections, what deadlines, etc.). However, too often what’s given isn’t enough to actually propose a solution. I want orgs to give us the right info! What do I mean?

The RFP listed the sections that were to be covered, with an objective for each. Which turned out to be ‘provide information’, followed by a listing of content! Sure, we could say we’ll do a knowledge dump, but you and I both know that won’t lead to any meaningful change. We could make our own inferences, and of course we did. But why should we have to?

Why aren’t we getting:

  • performance objectives
  • misconceptions/ways they go wrong
  • models
  • stories & examples
  • etc?

These are the things we need to scope a solution: to think of a pedagogical approach, to actually make something that works! It’s much easier to choose a pedagogical approach when you know what people have to do as an outcome. It affects scope, media, and more as well. These are all things that organizations want addressed in responses, yet they don’t give you enough to work with. They list content, and time (!).

Sure, we’ll ask these questions through the mechanisms they provide, and doing so shows we know what we’re talking about. It’s not clear they do, however! It’ll help us lift our game as an industry, collectively, if orgs start giving us the right info to make good proposals. Until then, we’ll see orgs request, and vendors respond with, info-dump courses. And we’ll continue to bore our learners and waste everyone’s time and money. Sigh.

Make Meaningful Practice

11 July 2023 by Clark Leave a Comment

Last week, I gave a webinar with the CEO of Upside Learning on microlearning. In the commentary, one of the attendees pointed to the research of Pooja Agarwal. It turns out she’s worked with Roediger (one of the authors of Make It Stick, a book on my list). In a paper of hers, I found justification for an approach I’ve advocated. My point is that we should make meaningful practice, which is something I think we don’t focus on enough, so let me elaborate.

So, I argue that even for rote knowledge, you should retrieve it in context and apply it. That is, I believe strongly in how Van Merriënboer talks about the knowledge you need and the complex problems you apply it to. In other words, the knowledge underpins the ability to determine an appropriate approach and execute it. However, checking to see whether you have the knowledge can be either a typical knowledge test or retrieval in some meaningful way. I think the former is boring, but it did seem to align with what learning science would imply.

Fortunately, in that paper (PDF), she tested this and found that while lower-level testing led to better lower-level recall, it didn’t impact higher-level problem-solving. Even a combination of low- and high-level questions wasn’t noticeably better than higher-level question practice alone. So, if you want the higher-level skills, you practice them; that’s what’s necessary. Such questions require you to know the lower-level material, but don’t seem to need separate fact-checks.

Which, for experience design, is great news. My book on engagement suggested more meaningful practice. (It’s really on learning experience design, as it’s a complement to my learning science book. The final chapter talks about a design process for integrating learning science with engagement.) What I proposed was to make practice meaningful by retrieving information in the context of applying it. This is the case whether it’s mini-scenarios, branching scenarios, or full games.

FYI, if you’re seeking a face-to-face workshop on engagement, I’ll point you to my upcoming one at DevLearn in Las Vegas on October 24. The focus is on elegantly integrating engagement, including how to make meaningful practice. It received top ratings across the board when I ran it last year, so I’m confident it’s worth it. I’m also running a related workshop online right now, at times most appropriate for the Asia-Pacific region; if you’re interested, you might check it out.

2023 ITA Jay Cross Memorial Award: Keeley Sorokti

5 July 2023 by Clark Leave a Comment

The Internet Time Alliance Memorial Award, in memory of Jay Cross, is presented to a workplace learning professional who has contributed in positive ways to the field of Informal Learning and is reflective of Jay‘s lifetime of work.

Recipients champion workplace and social learning practices inside their organization and/or on the wider stage. They share their work in public and often challenge conventional wisdom. The Award is given to professionals who continuously welcome challenges at the cutting edge of their expertise and are convincing and effective advocates of a humanistic approach to workplace learning and performance.

We announce the award on 5 July, Jay‘s birthday.

Following his death in November 2015, the partners of the Internet Time Alliance — Jane Hart, Charles Jennings, Clark Quinn, and Harold Jarche — resolved to continue Jay‘s work. Jay Cross was a deep thinker and a man of many talents, never resting on his past accomplishments, and this award is one way to keep pushing our professional fields and industries to find new and better ways to learn and work.

We introduce the winner of the 2023 ITA Jay Cross Memorial Award: Keeley Sorokti (on the recommendation of a previous winner, 2018’s Mark Britz).

Keeley Sorokti’s career as a knowledge management professional has been marked by her expertise in guiding organizations and teams through transformative journeys in designing and sustaining social learning, online community, and knowledge-sharing practices. Her impact can be seen in her work with multiple technology, non-profit, and higher education organizations, where she has improved knowledge creation and sharing, cross-boundary connections, collaboration, and learning experiences. Currently serving as the Director of Knowledge and Collaboration at Sift, a Digital Trust & Safety late-stage technology startup, Keeley co-designs solutions that place people at the center, fostering an open learning, knowledge-sharing, and collaboration culture across the organization.

In addition to her role at Sift, Keeley Sorokti’s influence extends beyond her workplace. She actively shares her expertise and insights. As an instructor, she co-teaches the Creating and Sharing Knowledge class in the Master of Science in Learning and Organizational Change (MSLOC) program at Northwestern University. She co-founded the Chicago Online Community Professionals peer-to-peer community of practice and coworking group where KM, L&D, online community, and digital workplace professionals from around the world support each other as they work to transform the way we work, learn, and share knowledge in our organizations.

Keeley has shown a commitment to advancing the field of workplace learning and her passion for working out loud and making work visible exemplifies her humanistic approach to learning and performance.

Browse Keeley’s articles and presentations: tinyurl.com/keeley-sorokti

From platitudes to pragmatics

4 July 2023 by Clark Leave a Comment

It’s easy to talk principle. (And I do. ;) Yet there are pragmatics we have to deal with as well. For instance, ‘clients’ (internal or external) giving us desired outcomes that are vague and unfocused. We generally don’t want to have to educate them about our business, yet we need more focused guidance, particularly when it comes to designing meaningful practice. How, then, do we get from platitudes to pragmatics?

To be clear, what’s driving this is trying to create practice that will lead to actual outcomes. That’s, first, because our practice is the most tangible manifestation of the performance objectives. It’s also the biggest contributor to the learning actually having an impact! We need good objectives to know what we’re targeting, and then the next thing we need to do is design the practice. After we design practice, we can develop the associated content, etc. How do we get this focus?

I see several ways. Ideally, we can engage clients in a productive conversation. We can take the advocated ‘yes, and…’ approach, where we turn the conversation to the outcomes they’re looking for, and ideally even to metrics, e.g. “how will we know when we’ve succeeded?” When we hear “our sales cycle takes too long” or “our closure rate isn’t good enough”, and the topic is sales, there are metrics there. If we hear “too many errors in manufacturing” or “customer service ratings aren’t high enough”, that’s quantifiable, and we have a target.

There are other situations, however. We might not get metrics, so we then have to infer them from the performance outcomes. When we hear “we need sales training” or “we need to review the manufacturing process” or “we need a refresher on customer service”, it’s a bit vaguer. We should try to dig in (“what part of sales isn’t up to scratch?” or “what are customers complaining about?”), but we may not always have the opportunity. Still, we can build practice assignments around these. We can provide practice around the specific associated tasks.

What really is the biggest problem is ‘awareness’ courses: “I just want folks to know this.” (Which raises the question: why?) I fear that part of the answer is a legacy belief that we’re formal, logical reasoning beings, and so new information will change our behavior. (NOT!) It can also be because the client just doesn’t know any better, nor has any greater insight than “if they know it, it is good”. However, I still think there’s something we can do here, even if it’s a case of ‘easier to get forgiveness than permission’.

I think we can infer what people would do with the information. If they insist we need to be aware of harassment, or diversity, or…, we can ask ourselves “what would folks do differently?” One decision is whether to intervene, report, or ignore. Another might be where and how to do those things. In general, even though the requester isn’t aware of it, there’s something they actually expect people to do. We have to infer what that might be. Then they can critique it, but the result is more effective for the organization and more engaging for the learner. That, to me, is a reasonable justification!

Whether it’s mapped to multiple choice questions (see Patti Shank’s seminal book on the topic), scenarios (Christy Tucker is one of our gurus), or full games (I have my own book on that ;), we need to give learners practice in dealing with the situations that use the information. I think we can work from platitudes to pragmatics, and should. What do you think?

Don’t just do!

27 June 2023 by Clark Leave a Comment

Look, doing is good. It’s better than not doing, for sure. When I say doing, by the way, I mean doing the things that need to be done. In your work, for instance: you should do your instructional design, your strategy. That’s all good. However, I want to suggest it’s not enough. Don’t just do, do more! At least, if you want to continue to learn (and you should; let’s not talk about the alternative, but either you’re growing or, well, you’re not).

What I’m talking about here is that just doing your job isn’t a bad thing, but you can and should do more. Most folks I talk to, at local chapter events and the like, want to go above and beyond. That’s why they’re there, after all. People say they want to learn, and they want to get recognition. I’ve previously addressed that, talking about writing session descriptions. But there’s more.

I’ve also written about being an expert: having a unique voice, a perspective, and sharing it. I think that’s important, too. However, there’s one more step I suggest, one that I don’t seem to have shared before. And that’s doing more.

First, of course, is taking advantage of opportunities to learn. I happen to know there are many free webinars. There are also talks that you can attend for a low fee. For more, you can attend online or in-person workshops. Then there are conferences. You’ll likely want to get your org to pay, but maybe even sometimes put in your own money. If you have a commute or other available time, listen to podcasts; there are a lot of those that are free too, as I also happen to know.

Read books; I use my local library heavily, not just for fiction (which I devour), but also for non-fiction. Interlibrary loan is a gift; use it if you can! Certain books are worth buying, creating a valuable library. I’ve got a shelf next to my desk that’s full of some of the best books I know of, so I can grab them to refer to certain things.

As you get your mind around the field, you’ll start seeing things in different ways. Not only will your work improve, but you’ll begin to find your own voice, a step on the way to expertise. Wrestle with things, and then share when they make sense. You’ll likely help others.

Then, take one further step. Don’t just attend the local chapter events and conferences; contribute. Serve on a committee. There’s a lot to be learned this way. You’ll meet folks, get exposed to new ideas, and make it easier to go further. It’s a good stepping stone on the way to speaking, for one. It’s also a way to give back to those who’ve contributed.

Sure, you can just do your job. Exist. Consume and produce. But I think there’s more to life, and I think if you’re here, you agree. So, here’re some concrete actions to take. Don’t just do, do more.

Web 3.0 and system-generated content

20 June 2023 by Clark Leave a Comment

Not quite 15 years ago, I proposed that Web 3.0 would be system-generated content. There was talk then about the semantic web, where we would start tagging things, even auto-tagging, and then operating on chunks by rules connecting tags, not hard-wiring. I think, however, that we’ve reached a new interpretation of Web 3.0 and system-generated content.

Back then, I postulated that Web 1.0 was producer-generated content. That is, the only folks who could put up content had all the skills: teams (or the rare individual) who could manage the writing, the technical specification, and the technical implementation. They had to put prose into HTML and then host it on the web with all the server requirements. These were the folks who controlled what was out there. Pages were static.

Then CGIs came along, and folks could maintain state. This enabled some companies to make tools that could handle the backend, so individuals could create. There were forms where you could type in content, and the system would handle posting it to the web (e.g. this blog!). So most anyone could be a web creator. Social media emerged (with all the associated good and bad). This was Web 2.0, user-generated content.

I saw the next step as system-generated content. Here, I meant small chunks of (human-generated) content linked together on the fly by rules. This is, indeed, what we see in many sites. For instance, when you see recommendations, they’re based upon your actions and statistical inferences from a database of previous actions. Rules pull up content descriptions by tags and present them together (a rough sketch of the idea follows).
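To illustrate, and only to illustrate, here’s a minimal sketch of that original notion: small, human-authored chunks carrying tags, assembled on the fly by a rule rather than hard-wired into a page. The chunks, tags, and the single ordering rule are all invented for the example.

```python
# Illustrative sketch of rule-based assembly of tagged content chunks.
# The chunks, tags, and ordering rule are invented for this example.

CHUNKS = [
    {"id": "intro-sales",  "tags": {"sales", "intro"},   "text": "Why the sales cycle matters..."},
    {"id": "model-funnel", "tags": {"sales", "model"},   "text": "The funnel model of a sale..."},
    {"id": "ex-followup",  "tags": {"sales", "example"}, "text": "Example: a follow-up call..."},
]

def assemble(wanted_tags, chunks=CHUNKS):
    """Rule: include any chunk sharing a tag with the request; intros go first."""
    picked = [c for c in chunks if c["tags"] & wanted_tags]
    picked.sort(key=lambda c: "intro" not in c["tags"])  # chunks tagged 'intro' sort to the front
    return "\n\n".join(c["text"] for c in picked)

if __name__ == "__main__":
    print(assemble({"sales"}))             # one 'page', assembled for a sales context
    print(assemble({"model", "example"}))  # a different assembly from the same chunks
```

The same chunks yield different ‘pages’ depending on the request and the rules, which is roughly what recommendation and personalization engines do at scale, just with far more sophisticated rules.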

There is another interpretation of Web 3.0, which is where systems are disaggregated. So, your content isn’t hosted in one place, but is distributed (cf. Mastodon or blockchain). Here, the content and content creation are not under the control of one provider. This disaggregation undermines unified control; it’s really a political issue with a technical solution.

However, we now see a new form of system-generated content. I’ll be clear: this isn’t what I foresaw (though, post hoc, it could be inferred). That is, generative AI is taking semantics to a new level. It’s generating content based upon previous content. That’s different than what I meant, but it is an extension. It has positives and negatives, as did the previous approaches.

Ethics, ultimately, plays a role in how these play out. As they say, PowerPoint doesn’t kill people, bad design does. So, too, with these technologies. While I laud exploration, I also champion keeping experimentation in check; that is, nicely sandboxing such experimentation until we understand it and can have appropriate safeguards in place. As it is, we don’t yet understand the copyright implications, for one. I note that this blog was contributing to Google’s C4 dataset (according to a tool I can no longer find), for instance. Also, everyone using ChatGPT has to assume that their queries are data.

I think we’re seeing system-generated content in a very new way. It’s exciting in terms of work automation, and scary in terms of the trustworthiness of the output. I’m erring on the side of not using such tools, for now. I’m fortunate to work in a place where people pay me for my expertise. Thus, I will continue to rely on my own interpretation of what others say, not on an aggregation tool. Of course, people could generate stuff and say it’s from me; that, too, is Web 3.0 and system-generated content. Do be careful out there!
