Learnlets


Clark Quinn’s Learnings about Learning

Reading Research?

14 September 2021 by Clark Leave a Comment

I was honored to have a colleague laud my Myths book (she was kind enough to also promote the newer learning science book), but it was something she said that I found intriguing. She suggested that one of the things it includes is “discussing how to read research”. It occurs to me that this is worth unpacking a wee bit more. So here’s a discussion of how we (properly) develop learning science, and how that informs reading research.

Caveat: I haven’t been an active researcher for decades; I serve instead to interpret and apply the research. But it’s easier to say ‘we’ than ‘scientists’, etc.

Generally, theory drives research. You’ve created an explanation that accounts for observed phenomena better than previous approaches. What you do then is extend it to other predictions, and test them.  Occasionally, we do purely exploratory studies just to see what emerges, but mostly we generate hypotheses and test them.

We do this with some rigor. We try to ensure that the method we devise removes confounding variables, and then we use statistical analysis to remove the effects of other factors. For instance, I created a convoluted counterbalancing approach to remove order effects in my Ph.D. research. (So complicated that I first had to analyze a factor or two, to confirm they weren’t influencing the results, before I could set them aside in the final analysis!) We also try to select relevant subjects, design uncontaminated materials, and carefully control our analysis. Understanding the ways in which we do this requires familiarity with experimental design, which isn’t common knowledge.
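To make the order-effects point concrete, here’s a minimal sketch of counterbalancing with a balanced Latin square. It’s a generic illustration with hypothetical conditions, not the actual design from my dissertation: each condition appears in each serial position equally often, so presentation order can’t masquerade as a treatment effect.

```python
# A minimal sketch of counterbalancing order effects with a balanced Latin
# square. The conditions here are hypothetical, purely for illustration.

def balanced_latin_square(conditions):
    """One presentation order per participant slot: each condition appears in
    each position equally often, and (for an even number of conditions) each
    condition follows each other condition equally often."""
    n = len(conditions)
    orders = []
    for participant in range(n):
        order = []
        for position in range(n):
            if position % 2 == 0:
                offset = position // 2
            else:
                offset = n - (position + 1) // 2
            order.append(conditions[(participant + offset) % n])
        orders.append(order)
    return orders

for i, order in enumerate(balanced_latin_square(["A", "B", "C", "D"]), start=1):
    print(f"Participant {i}: {order}")
```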

Moreover, we then need to share this with our colleagues so that they can review what we’ve done. We need to do it in unambiguous language, using the specific vocabulary of our field. And we need to make it scrutable. Thus, we publish in peer-reviewed journals, which means others have looked at our work and deemed it acceptable. However, the language is deliberately passive, unemotional, and precise, as well as focused on a very narrow topic. Thus, it’s not a lot of fun to read unless you really care about the topic!

There are problems with this. Increasingly, we’re finding that trying to isolate independent variables doesn’t reflect the inherent interactions. Our brains actually have a lot of complexity that hinders simple explanations. We’ve also found that it’s difficult to get representative subjects, when what’s easy to get are higher education students in the developed world. There are also politics involved, sad to say, so it can be hard for new ideas to emerge if they challenge the entrenched views. Yet, it’s still the best approach we have. The scientific method has led to more advances in understanding than anything else!

There are things to worry about as a consumer of science. For one, there are people who fake results. They’re few, of course. There’s also research that’s kept proprietary for financial reasons, or that’s commissioned. As soon as there’s money involved, there’s the opportunity for corruption (think: tobacco, and sugar). Companies may have something that they tout as valid, but the research base isn’t publicly available. Caveat emptor!

Thus, being able to successfully read research isn’t for everyone. You need to be able to comprehend the studies, and know when to be wary. The easy thing to do is to look for translations, and translators, who have demonstrated a trustworthy ability to help sort out the wheat from the chaff. They exist.

I hope this illustrates what reading research requires. You can take some preliminary steps: give it the ‘sniff’ test, see if it applies to you, and see who’s telling you this (and whether anyone else agrees or argues the contrary) and what their stake in the game is. If these steps don’t answer the question, however, you may want to look for good guidance. Make sense?

 

Coping with Change: A Book Review of Flux by April Rinne

9 September 2021 by Clark 1 Comment

How do we cope with change? There’s a myth that we resist change, but Peter de Jaeger busted that in a talk I heard, where he pointed out that we make changes all the time. We get married, take a different job, have kids, all of which are changes. The difference is that these are changes we choose! However, in this era of increasing change, we’re likely going to face more and more changes we didn’t expect. Can we improve our ability to cope with change? Yes, says April Rinne in her book Flux: 8 Superpowers for Thriving in Constant Change.

And  here’s a caveat: I am part of a  group she put together to talk about Flux while writing the book. I’m in the acknowledgements.

April, faced with a heavy unchosen change in her teens, carried that with her. It’s driven her interest in change and how we can learn to cope. Given that we’re in an era of increasing change, she recognized that we would benefit from approaches to improve our resilience. She looked at a wide variety of inputs, and has distilled her learnings into eight mental frameworks that help.

The underlying focus is on a flux mindset, that is, a stance that change is coming and should be accepted, not resisted. The eight different ways of looking at the world are deliberately provocative, but also apt:

  • Run Slower
  • See What’s Invisible
  • Get Lost
  • Start with Trust
  • Know Your ‘Enough’
  • Create Your Portfolio Career
  • Be All the More Human
  • Let Go of the Future

Each gets a chapter, with illustrations of the challenge, and practical ways to put it into practice. You may find, as I did, that some are familiar while others are more challenging. Each comes from ancient wisdom, practical experience, or both. The ones that were new I find all the more interesting. And useful!

That’s the real key. It’s very much aligned with what we know about how our brains work (a big issue with me, as this audience has probably learned ;). Some areas I feel I’ve a handle on (e.g. run slower), and others are more challenging (e.g. see what’s invisible). There are bound to be areas of work for you. The upside of that work, however, is likely to be a better ability to ‘be’.

This is a book that you’ll want your loved ones to read, because what it provides aligns with a view of the world as it could and should be. It’s a guide for coping with change that addresses not only individuals, but organizations and society as a whole.  Highly recommended.

Iterating and evaluating

7 September 2021 by Clark Leave a Comment

I’ve argued before about the need for evaluation in our work. This occurs summatively, where we’re looking beyond smile sheets to actually determine the impact of our efforts. However, it also should work formatively, where we’re seeing if we’re getting closer. Yet there are some ways in which we go off track. So I want to talk about iterating and evaluating our learning initiatives.

Let’s start by talking about our design processes. The 800 lb gorilla of ADDIE has shifted from a waterfall model to a more iterative approach. Yet it still brings baggage. Of late, more agile and iterative approaches have emerged, not least Michael Allen’s SAM and Megan Torrance’s LLAMA. Agile approaches, where we’re exploring, make more sense when designing for people, with their inherent complexity.

Agile approaches work on the basis of creating, basically, Minimum Viable Products, and then iterating. We evaluate each iteration. That is, we check to see what needs to be improved, and what is good enough. However, when are we done?

In my workshops, when talking about iteration, I like to ask the audience this question. Frequently, the answer is “when we run out of time and money”. That’s an understandable answer, but I maintain it’s the  wrong answer.

If we iterate until we run out of time and money, we don’t know that we’ve actually met our goals. As I explained about social media metrics (and it applies here too), you should be iterating until you achieve the metrics you’ve set. That means you know what you’re trying to do!

Which requires, of course, that you set metrics about what your solution should achieve. That could include usability and engagement (which come before and after, respectively), but most critically ‘impact’. Is this learning initiative solving the problem we designed it to address? Which also means you need to have a discussion of why you’re building it, and how you’ll know it’s working.

Of course, if you’re running out of time and money faster than you’re closing in on your goal, you have to decide whether to relax your standards, or apply for more resources, or abandon your work, or…but at least you’re doing so consciously. Even that is still better than arbitrarily deciding, for example, that three iterations is appropriate.
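As a concrete version of that logic, here’s a minimal sketch; the metric name, the 0.8 target, and the iteration budget are all hypothetical, purely for illustration. The point is that the stopping rule is the impact target, and exhausting the budget triggers a conscious decision rather than quietly ending the work.

```python
# Minimal sketch of iterate-and-evaluate logic; the metric, target, and
# budget are hypothetical stand-ins, not a prescription.
import random

def build_mvp():
    return {"impact": 0.4}            # stand-in for a minimum viable course

def evaluate(prototype):
    return prototype["impact"]        # stand-in for measuring real impact

def revise(prototype):
    prototype["impact"] += random.uniform(0.05, 0.15)  # pretend improvement
    return prototype

def iterate_until_impact(target=0.8, budget_iterations=10):
    prototype = build_mvp()
    for i in range(1, budget_iterations + 1):
        impact = evaluate(prototype)
        if impact >= target:
            return f"target met after {i - 1} revisions (impact={impact:.2f})"
        prototype = revise(prototype)
    # Budget exhausted below target: decide consciously -- relax the target,
    # ask for more resources, or abandon -- rather than just stopping.
    return f"budget exhausted below target (impact={evaluate(prototype):.2f})"

print(iterate_until_impact())
```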

I do recognize that this isn’t our current situation, and changing it isn’t easy. We’re still asked to make slide decks look good, or create a course on X, etc. Ultimately, however, our professionalism will ask us to do better. Be ready. Eventually, your CFO should care about the return on your expenditures, and it’ll be nice to have a real answer. So, iterating and evaluating  should  be your long term approach. Right?

Making it Meaningful

31 August 2021 by Clark 1 Comment

I volunteer for our local Community Emergency Response Team (CERT; and have learned lots of worthwhile things). On a call, our local organizer mentioned that she was leading a section of the train-the-trainers upcoming event, and was dreading trying to make it interesting. Of course I opened my big yap and said that’s something I’m focusing on, and offered to help. She took me up on it, and it was a nice case study in making it meaningful.

Now, I have a claim that you can’t give me a topic that I can’t create a game for. I’m now modifying that to ‘you can’t give me a topic I can’t make meaningful’.  She’d mentioned her topic was emergency preparedness, and while she thought it was a dull topic, I was convinced we could do it. I mentioned that the key was making it visceral.

I had personal experience; last summer our neighbor was spreading the rumor that we were going to have to evacuate owing to a fire over the ridge. (Turns out, my neighbor was wrong.) I started running around gathering sleeping bags, coats, dog crate, etc. Clearly, I was thinking about shelter. When I texted m’lady, she asked about passports, birth certificates, etc. Doh!

However, even without that personal example, there’s a clear hook. When I mentioned it, she noted that when you’re in a panic, your brain shuts down some, so it’s really critical to be prepared. She added, though, that someone else was taking that bit, and her real topic was different types of disasters. Yet my example had already got her thinking, and she started talking about different people being familiar with earthquakes (here in California).

I thought of how, when talking with scattered colleagues, they exclaim about how earthquakes are scary, and I remind them that every place has its hazards. In the midwest it could be tornados or floods. On the east coast it’s hurricanes. Etc. The point being that everyone has some experience. Tapping into that, talking about consequences, is a great hook.

That’s the point, really. To get people willing to invest in learning, you have to help them see that they do need it. (Also, that they don’t know it now, and that this experience will change that.) You need to be engaged in making it meaningful!

Again, in my mind learning experience design (LXD) is about the elegant integration of learning science with engagement. You need to understand both. I’ve got a book and a workshop on learning science, and I’ve a workshop at DevLearn on the engagement side. I’ve also got a forthcoming book and an online workshop coming for more on engagement. Stay tuned!

More lessons from bad design

24 August 2021 by Clark 2 Comments

I probably seem like a crank, given the way I take things apart. Yet, I maintain there’s a reason beyond “get off my lawn!” I point out flaws not to complain, but instead to point to how to do it better. (At least, that’s my story and I’m sticking to it. ;) Here’s another example, providing more lessons from bad design.

In this case, I’ll be attending a conference and the providers have developed an application to support attendees. In general, I look forward to these applications. They provide ways to see who’s attending, and peruse sessions to set your calendar. There are also ways to connect to people. However, two major flaws undermine this particular instance.

The first issue is speed. This application is slow! I timed it: 4 seconds to open the list of speakers or attendees. Similarly, I clicked on a letter to jump through the list of attendees; that took anywhere from 4 to 8 seconds. Jumping to the program took 6 seconds.

While that may seem short, compare that to most response times in apps. You essentially can’t time them, they’re so fast. More than a second is an era in mobile responsiveness. I suspect that this app is written as a ‘wrapped’ website, not a dedicated app. Which works sometimes, but not when the database is too big to be responsive. Or it could just be bad coding. Regardless, this is  basically unusable. So test the responsiveness before it’s distributed to make sure it’s acceptable. (And then reengineer it when it isn’t.)
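As a concrete version of “test the responsiveness before it’s distributed”, here’s a minimal sketch of a latency check; the operations and the one-second budget are hypothetical stand-ins, not the actual app’s code.

```python
# Minimal sketch of a responsiveness check; the operations and the
# one-second budget are hypothetical stand-ins for real app interactions.
import time

LATENCY_BUDGET_S = 1.0  # anything over a second feels like an era on mobile

def check_latency(name, operation):
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    status = "OK" if elapsed <= LATENCY_BUDGET_S else "TOO SLOW"
    print(f"{name}: {elapsed:.2f}s [{status}]")

# Stand-ins for the real interactions (e.g., opening the attendee list).
check_latency("open attendee list", lambda: time.sleep(0.3))
check_latency("jump to a letter", lambda: time.sleep(1.4))
```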

That alone would be sufficient to discount this app, but there’s a second problem. Presumably for revenue reasons, there are ads that scroll across the top. Which might make sense to keep the costs of the app down, but it runs into a fundamental feature of our visual architecture.

Motion in the periphery of our vision is distracting. That was evolutionarily adaptive, allowing us to detect threats from places that we weren’t focusing on. Yet, when it’s not a threat, and we  are trying to focus on something, it interferes. We learned about this in the days of web pages with animated gifs: you couldn’t process what you were there to consume!

In this app, the scrolling of the ads makes it more difficult to read the schedule, attendee lists, and other information. Thus, the whole purpose of the application is undermined. You could have static ads that are randomly attached to the pages you click on. The audience is likely to go to several pages, so all the ads will get seen. Having them move, however, to ensure that you see them all undermines the whole purpose of the app.

Oddly enough, there are other usability problems here. On the schedule, there’s a quick jump to times on a particular day. Though it stops at 2PM!?!? (The conference extends beyond that; my session’s at 4PM.) You’d think you could swipe to see later times on that ‘jump’ menu, but that doesn’t work. I haven’t dug further, because the usability makes it too painful; we may be missing more lessons from bad design.

Our cognitive architecture is powerful, but has limitations. Designing to work in alignment with our brains is a clear win; and this holds true for designing for learning as well as performance support. Heck, I’ve written a whole book  about how our minds work, just to support our ability to design better learning! Conflicting with our mental mechanisms is just bad design. My goal is that with more lessons in bad design, we can learn to do better. Here’s to good design!

Caveat Malarkey

17 August 2021 by Clark 1 Comment

After continuing to take down marketing blather, it’s time for a plea. Caveat Malarkey!



If you’ve been paying attention, you will have seen that a number of my blog posts take down a variety of articles that are rife with malarkey. A lot of them come from connections or pointers on LinkedIn. (If you want to live in infamy, feel free to point me to your posts. ;) It’s time to address what I’m seeing, from two points of view. One is my advice to vendors in the L&D space. The other is advice to you who are consumers of education & technology products. The underlying theme is Caveat Malarkey!

What I’m talking about is the large number of posts that do one of several things. First, they use myths to promote products. These are things like the attention span of a goldfish, learning styles, generations/digital natives, etc. Second, they are unclear on concepts. They toss around bizbuzz without being clear about what the terms mean, and more importantly what it takes to make them work (and what doesn’t)! Of course, there are posts that accomplish both.

So let’s start with myths. Heck, I wrote a book about them, just because they won’t go away! For instance, while we know that learners differ, we can’t (and shouldn’t) adapt our learning to match styles. There’s no evidence that adapting to styles helps. Worse, there is, as yet, no meaningful way to reliably characterize learners according to styles. Similarly, the claim that our attention span has dropped doesn’t stand up to biological or empirical scrutiny. We don’t evolve that fast, and there’s plenty of counter-evidence. The claim comes from a misinterpretation of an essentially irrelevant study. The notions that we can characterize people by the ‘generation’ they’re born in, or that people who grew up with digital technology are somehow ‘natives’, are also both found lacking when examined closely. My book covers 13 more myths as well.

Then there’s conceptual clarity. Again, my most recent book is on learning science, trying to provide the foundation for clear understanding. Thus, when we hear terms like microlearning or workflow learning or whatever else will emerge, tread carefully! There are some powerful ideas on tap, but people who don’t bother to unpack the terms and detail how they differ in design and use shouldn’t be trusted.

My message is twofold. For one, as consumers, watch out for these approaches. If someone’s being glib, be wary! Learn about the concepts and the myths, and then dig in. If there’s a claim, take several steps. First, give it the ‘sniff test’. If it doesn’t make conceptual sense, and/or isn’t relevant to you, back away. Second, track it back. Who’s making this claim, and what’s their vested interest? Is anyone independently saying the same thing? Importantly, is anyone arguing the contrary, and what’s their interest? Eventually, you might go back to the original research, but if you haven’t been trained, I encourage you to look to reputable purveyors of evidence-based perspectives.

To the vendors, please help. We need to raise our industry to a professional level. Get someone to write your articles who knows what they’re talking about. Don’t let social media interns (let alone the “I’ll write articles for you” cold-mailers) write your materials. Find someone who understands learning. More importantly, get someone who understands learning to actually guide your products and/or service design, and then you can tout scrutable opportunities.

In the long term, we can lift our industry to an evidence-based, professional standard. In the short term, we need to watch out for questionable claims, and shoot for real value. Caveat malarkey!

More Marketing Malarkey

10 August 2021 by Clark 2 Comments

As has become all too common, someone decided to point me to some posts for their organization. Apparently, interest was sparked by a previous post of mine where I’d complained about  microlearning. While this one  does a (slightly) better job talking about  microlearning, it is riddled with other problems. So here’s yet another post about  more marketing malarkey.

First, I don’t hate microlearning; there are legitimate reasons to keep content small. It can get rid of the bloat that comes from contentitis, for one. There are solid reasons to err on the side of performance support as well. Most importantly, perhaps, is also the benefit of spacing learning to increase the likelihood of it being available. The thing that concerns me is that all these things are different, and take different design approaches.

Others have gone beyond just the two types I mention. One of the posts cited a colleague’s more nuanced presentation about small content, pointing out four different ways to use microlearning (though interestingly, five were cited in the referenced presentation). My problem, in this case, wasn’t the push for microlearning (there were some meaningful distinctions, though no actual mention of how they require different design). Instead, it was the presence of myths.

One of the two posts opened with this statement: “The appetite of our employees is not the same therefore, we must not provide them the same bland food (for thought).” This seems a bit of a mashup. Our employees aren’t the same, so they need different things? That’s personalization, no? However, the conversation goes on to say: “It’s time to put together an appetizing platter and create learning opportunities that are useful and valuable.” Which seems to argue for engagement. Thus, it seems like it’s instead arguing that people need more engaging content. Yes, that’s true too. But what’s that got to do with our employees not having the same appetite? It seems to be swinging towards the digital native myth, that employees now need more engaging things.

This is bolstered by a later quote: “When training becomes overwhelming and creates stress, a bite-sized approach will encourage learning.” If training becomes overwhelming and stressful, it  does suggest a redesign. However, my inclination would be to suggest that ramping up the WIIFM and engagement are the solution. A bite-sized approach, by itself, isn’t a solution to engagement. Small wrong or dull content isn’t a solution for dull or wrong content.

This gets worse in the other post. There were two things wrong here. The first one is pretty blatant:

There are numerous resources that suggest our attention spans are shrinking. Some might even claim we now have an average attention span of only 8 seconds, which equals that of a goldfish.

There are, of course, no such resources pointed to. Moreover, the sources that originally proposed this have been debunked. This is actually the ‘cover story’ myth of my recent book on myths! In it, I point out that the myth about attention span came from a misinterpreted study, and that our cognitive architecture doesn’t change that fast. (With citations.) Using this ‘mythtake’ to justify microlearning is just wrong. We’re segueing into tawdry marketing malarkey here.

This isn’t the only problem with this post, however. A second one emerges when there’s an (unjustified) claim that learning should have 3E’s: Entertaining, Enlightening, and Engaging. I do agree with Engaging (per the title of my first book); however, there’s a problem with how it’s handled. And with the other ones. So, for Entertaining, this is the follow-up: “advocates the concept of learning through a sequence of smaller, focused modules.” Why is smaller inherently more entertaining? Also, in general, learning doesn’t work as well when it’s just ‘fun’, unless it’s “hard fun”.

Enlightening isn’t any better. I do believe learning should be enlightening, although particularly for organizational learning it should be transformative in terms of enhancing an individual’s ability to  perform. Just being enlightened doesn’t guarantee that. The followup says: “Repetition, practice, and reinforcement can increase knowledge.” Er, yes, but that’s just good design. There’s nothing unique to microlearning about that.

Most importantly, the definition for Engaging is “A program journey can be spaced enough that combats forgetting curve.” That is spacing! Which isn’t a bad thing (see above), but not your typical interpretation of engaging. This is really confused!

Further, I didn’t even need to fully parse these two posts. Even on a superficial examination, they fail the ‘sniff test’. In general, you should be avoiding folks that toss around this sort of fluffy biz buzz, but even more so when they totally confound a reasonable interpretation of these concepts. This is just more marketing malarkey. Caveat emptor.

(Vendors, please please please stop with the under-informed marketing, and present helpful posts. Our industry is already suffering from too many myths. There’s possibly a short-term benefit; however, the trend seems to be that people are paying more attention to learning science. Thus, in the long run I reckon it undermines your credibility. While taking such posts down is fun and hopefully educational, I’d rather be writing about new opportunities, not remedying the old. If you don’t have enough learning science expertise to do so, I can help: books, workshops, and/or writing and editing services.)

 

Concept Maps and Learning

3 August 2021 by Clark 1 Comment

Once again, someone notified me of something they wanted me to look at. In this case, a suite of concept maps, with a claim that this could be the future of education. And while I’m a fan of concept maps, I was suspicious of the claim. So, while I’ve written on mindmaps before, it’s time to dig into concept maps and learning.

To start, the main separation between mindmaps and concept maps is labels. Specifically, concept maps have labels that indicate the meaning of the connections between concepts. At least, that’s my distinction. So while I’ve done (a lot of) mindmaps of keynotes, they’re mostly of use to those who also saw the same presentation. Otherwise, the terms and connections don’t necessarily make sense. (Which doesn’t mean a suite of connections can’t be valuable, cf. Jerry’s Brain, where Jerry Michalski has been tracking his explorations for over two decades!) However, a concept map does a better job of capturing the total knowledge representation.
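To put that distinction in data-structure terms, here’s a generic sketch (not SemNet’s or Jerry’s Brain’s actual format, and the example content is hypothetical): a mindmap is essentially a hierarchy of unlabeled links, while a concept map is a set of labeled (concept, relation, concept) triples that can stand on their own.

```python
# A generic sketch of the structural difference; not the actual data model
# of SemNet, Jerry's Brain, or the project discussed here.

# Mindmap: a hierarchy of unlabeled connections -- you know *that* two ideas
# are linked, but not *how*.
mindmap = {
    "evaluation": ["formative", "summative", "metrics"],
    "metrics": ["impact", "usability", "engagement"],
}

# Concept map: every link carries a label naming the relationship, so the
# representation stands on its own for someone who wasn't in the room.
concept_map = [
    ("formative evaluation", "is a kind of", "evaluation"),
    ("formative evaluation", "informs", "revision"),
    ("summative evaluation", "measures", "impact"),
    ("impact", "is expressed as", "metrics"),
]

for subject, relation, obj in concept_map:
    print(f"{subject} --{relation}--> {obj}")
```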

I know a wee bit about this, because while writing up my dissertation, I had a part-time job working with Professor  Kathy Fisher and SemNet. Kathy Fisher is a biologist and teacher who worked with Joe Novak (who can be considered the originator of concept mapping). SemNet is a Macintosh concept mapping tool (Semantic Network) that Kathy created and used in teaching biology. It allows students to represent their understanding, which instructors can use to diagnose misconceptions.

I also later volunteered for a while with the K-Web project. This was a project with James Burke (of Connections fame) creating maps of the interesting historical linkages his show and books documented. Here again, navigating linkages can be used for educational purposes.

With this background, I looked at this project. The underlying notion is to create a comprehensive suite of multimedia mindmaps of history and the humanities. This, to me, isn’t a bad thing! It provides a navigable knowledge resource that could be a valuable adjunct to teaching. Students can be given tasks to find the relationships between two things, or asked to extend the concept maps, or… Several things, however, are curious at least.

The project claims to be a key to the future of global education. However, as an educational innovation, the intended pedagogical design is worrisome. The approach claims that “They have complete freedom to focus on and develop whichever interests capture their fancy.” and “…the class is exposed to a large range of topics that together provide a comprehensive and lively view of the subject…”  This is problematic for two reasons. First, there appears to be no guarantee that this indeed will provide comprehensive coverage. It’s possible, but not likely.

As a personal example, when I was in high school, our school district decided that the American Civil War would be taught as modules. Teachers chose to offer whatever facets they wanted, and students could take any two modules they wanted. Let me assure you that my knowledge of the Civil War did not include a systematic view of the causes, occurrences, and outcomes, even in ideologically distorted versions. Anything I now know about the Civil War comes from my own curiosity.

Even with the social sharing, a valuable component, there appears to be no guidance to ensure that all topics are covered. Fun, yes. Curricularly thorough, no.

Second, presenting on content doesn’t necessarily mean you’ve truly comprehended it. As my late friend, historian Joseph Cotter, once told me, history isn’t about learning facts, it’s about learning to think like a historian. You may need the cultural literacy first, but then you need to be able to use those elements to make comparisons, criticisms, and more.  Students should be able to  think with these facts.

Another concerning issue in the presentation about this initiative is this claim: “reading long passages of text no longer works very well for the present generation of learners. More than ever, learners are visual learner [sic].” This confounds two myths, the digital native myth with the learning styles myth. Both have been investigated and found to be lacking in empirical support. No one likes to read long passages of text without some intrinsic interest (but we can do that).

In short, while I laud the collection, the surrounding discussion is flawed. Once again, there’s a lack of awareness of learning science being applied. While that’s understandable, it’s not sufficient.  My $0.05.

A new common tragedy?

27 July 2021 by Clark 2 Comments

Recently, my kids (heh, in their 20s) let me know that they don’t use Yelp. That actually surprised and puzzled me. Not specifically because of Yelp, but instead because there’s a societal benefit that’s possibly being undermined or abandoned. I may be naive, but I think that we may be missing an opportunity. So here’s my exploration of a potential new common tragedy.

The idea of the commons is simple, though also somewhat controversial. There’s a shared resource. In the traditional economic model, it’s limited. Thus, everyone taking advantage of it ends up ruining the resource (the infamous ‘tragedy of the commons’). In this case, however, the potential tragedy is different.

Information, as has been said, wants to be free. With the internet, it’s almost that way, and there are almost zero limits on the information (for better or worse). We can take advantage of the information for little more than the cost of a browser-capable device and an internet connection (which can come just with a cup of coffee ;). We can also contribute. That’s social media.

That’s been the premise of some of the more powerful ideas of the internet. If we share information, we can all benefit. Thus, we should offer up information and in return get the benefit. We don’t have to offer it, but if we do we all benefit. It’s cooperation. Social media has led to many great wins. My colleague and friend, Paul Signorelli, has a new book just on that! In his Change the World Using Social Media, he says “social media platforms can…produce positive change”. Of course, there are also problematic uses. The ways in which certain platforms (*cough* Facebook *cough*) have been used to spread misinformation are a caution. Yet, I believe these are problems that are solvable.

Now, Yelp is a service where people can share reviews of almost any service: repairs, meals, … And it’s just an example; there are other ways people share information, such as Wikipedia, NextDoor, etc. Yelp got off to a somewhat idiosyncratic start, owing to claims of favoritism. However, it’s now relatively reliable, I believe. (Am I wrong?)

The possibility is that if everyone fairly uses such a service, everyone benefits. You do have to offer your own input, but you gain from others. Of course, the service itself must be principled, including a way to self-repair any problems. There can be more than one such service, though one tends to end up being dominant.

What’s problematic, to me, is why people  wouldn’t participate. For example, my kids. For one, there’s a belief that people only write negative reviews. Yet we do see businesses with ratings from 3 to 5, so clearly there are positive reviews (I’ve done both).  Yelp has helped me find good places to eat and get valuable services. I’ve likewise shared my experiences, to help others.

However, what may not be solvable is getting people on board with the idea of the benefit. If we turn away from this opportunity, we end up losing out. Yes, I can be an idealist, but I’d hope that we can see the ultimate benefit that can be obtained. Across many platforms, ideally. I’d like to avoid a new common tragedy. I’m also willing to be wrong, so I welcome feedback.

 

My ‘Man on the Moon’ Project

20 July 2021 by Clark 8 Comments

There have been a variety of proposals for the next ‘man on the moon’ project since JFK first inspired us. These include going to Mars, infrastructure revitalization, and more. And I’m sympathetic to them. I’d like us to commit to manufacturing and installing solar panels over all parking lots, both to stimulate jobs and the economy and to transform our energy infrastructure, for instance. However, with my focus on learning and technology, there’s another ‘man on the moon’ project I’d like to see.

I’d like to see an entire K12 curriculum online (in English, but open, so that anyone can translate it). However, there are nuances here. I’m not oblivious to the fact that there are folks pushing in this direction. I don’t know them all, but I certainly have some reservations. So let me document three criteria that I think are critical to make this work (cue my claim that there are “only two things wrong with education in this country, the curriculum and the pedagogy; other than that it’s fine”).

First, as presaged, it can’t be the existing curriculum.  Common Core isn’t evil, but it’s still focused on a set of elements that are out of touch. As an example, I’ll channel Roger Schank on the quadratic equation: everyone’s learned (and forgotten) it, almost no one actually uses it. Why? Making every kid learn it is just silly. Our curriculum is a holdover from what was stipulated at the founding of this country. Let’s get a curriculum that’s looking forward, not back. Let’s include the ability to balance a bankbook, to project manage, to critically evaluate claims, to communicate visually, and the like.

Second, as suggested, it can’t be the existing pedagogy. Lecture and test don’t lead to retaining and transferring the ability to do. Instead, learning science tells us that we need to be given challenging problems, and resources and guidance to solve them. Quite simply, we need to practice as we want to be able to perform. Instruction is designed action and guided reflection. Ideally, we’d layer learning on top of learner interests. Which leads to the third component.

We need to develop teachers who can facilitate learning in this new pedagogy. We can’t assume teachers can do this. There are many dedicated teachers, but the system is aligned against effective outcomes. (Just look at the lack of success of educational reform initiatives.) David Preston, with his Open Source Learning, has a wonderful idea, but it takes a different sort of teacher. We also can’t assume learners sitting at computers. So, having a teacher support component along with every element is important.

Are there initiatives that are working on all this? I have yet to see any one that’s gotten it  all right.  The ones I’ve seen lack on one or another element. I’m happy to be wrong!

I also recognize that agreeing on all the elements, each of which is controversial, is problematic. (What’s the right curriculum? Direct instruction or constructivist? How do we value teachers in society?) We’d have major challenges in assembling folks to address any one of these, let alone all of them, and achieving convergence.

However, think of the upside. What could we accomplish if we had an effective education system preparing youth for the success of our future? What is the best investment in our future? I realize it’s a big dream, and I’m not in a position to make it happen. Yet I did want to drop the spark, and see if it fires any imaginations. I’m happy to help. So, this is my ‘man on the moon’ project; what am I missing?
