Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: Reading Research

Reading Research?

14 September 2021 by Clark

I was honored to have a colleague laud my Myths book (she was kind enough to also promote the newer learning science book), but it was something she said that I found intriguing. She suggested that one of the things it includes is “discussing how to read research”. And it occurs to me that it’s worth unpacking the situation a wee bit more. So here’s a discussion of how we (properly) develop learning science, which informs us in reading research.

Caveat: I haven’t been an active researcher for decades, serving instead to interpret and apply the research, but it’s easier to say ‘we’ than “scientists”, etc.

Generally, theory drives research. You’ve created an explanation that accounts for observed phenomena better than previous approaches. What you do then is extend it to other predictions, and test them.  Occasionally, we do purely exploratory studies just to see what emerges, but mostly we generate hypotheses and test them.

We do this with some rigor. We try to ensure that the method we devise removes confounding variables, and then we use statistical analysis to remove the effects of other factors. For instance, I created a convoluted counterbalancing approach to remove order effects in my Ph.D. research. (So complicated that I had to analyze a factor or two first, to ensure they had no effect, so I could remove them from the resulting analysis!) We also try to select relevant subjects, design uncontaminated materials, and carefully control our analysis. Understanding the ways in which we do this requires knowledge of experimental design, which isn’t common knowledge.
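The counterbalancing of order effects mentioned above can be sketched in a few lines. This is purely illustrative (the function name and the Latin-square scheme are my choices, not from the post): each condition appears in each serial position exactly once across subjects, so order effects wash out in the aggregate.

```python
def latin_square_orders(conditions):
    """Build a Latin square of presentation orders: each condition
    appears in each serial position exactly once across the set of
    orders, one common way to counterbalance order effects."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# Four subjects (one order each), four conditions:
orders = latin_square_orders(["A", "B", "C", "D"])
for order in orders:
    print(order)
```

Real designs get more elaborate (balanced Latin squares also control immediate-sequence effects), which is exactly why this demands expertise beyond common knowledge.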

Moreover, we then need to share this with our colleagues so that they can review what we’ve done. We need to do it in unambiguous language, using the specific vocabulary of our field. And we need to make it scrutable. Thus, we publish in peer-reviewed journals, which means others have looked at our work and deemed it acceptable. However, the language is deliberately passive, unemotional, and precise, as well as focused on a very narrow topic. Thus, it’s not a lot of fun to read unless you really care about the topic!

There are problems with this. Increasingly, we’re finding that trying to isolate independent variables doesn’t reflect the inherent interactions. Our brains actually have a lot of complexity that hinders simple explanations. We’ve also found that it’s difficult to get representative subjects, when what’s easy to get is higher education students in the developed world. There are also politics involved, sad to say, so that it can be hard for new ideas to emerge if they challenge entrenched views. Yet, it’s still the best approach we have. The scientific method has led to more advances in understanding than anything else!

There are things to worry about as a consumer of science. For one, there are people who fake results. They’re few, of course. There’s also research that’s kept proprietary, for financial reasons. Or is commissioned. As soon as there’s money involved, there’s the opportunity for corruption (think: tobacco, and sugar). Companies may have something that they tout as valid, but the research base isn’t publicly available. Caveat emptor!

Thus, being able to successfully read research isn’t for everyone. You need to be able to comprehend the studies, and know when to be wary. The easy thing to do is to look for translations, and translators, who have demonstrated a trustworthy ability to help sort out the wheat from the chaff. They exist.

I hope this illustrates what reading research requires. You can take some preliminary steps: give it the ‘sniff’ test, see if it applies to you, and see who’s telling you this (and if anyone else is agreeing or saying to the contrary) and what their stake in the game is. If these steps don’t answer a question, however, maybe you want to look for good guidance. Make sense?

 

Bad research

17 October 2023 by Clark

How do you know what’s dubious research? There are lots of signals, more than I can cover in one post. However, a recent discovery serves as an example to illustrate some useful signals. I was trying to recall a paper I recently read, which suggested that reading is better than video for comprehending issues. Whether that’s true or not isn’t the issue. What is the issue is that in my search, I came across an article that really violated a number of principles. As I am wont to do, let’s briefly talk about bad research.

The title of the article (paraphrasing) was “Research confirms that video is superior to text”. Sure, that could be the case! (Actually the results say, not surprisingly, that one medium’s better for some things, and another’s better for others; BTW, one of our great translators of research to practice, Patti Shank, has a series of articles on video that’s worth paying attention to.) Still, this article claimed to have a definitive statement from at least one study. However, when I looked at it, there were several problems.

First, the study was a survey asking instructors what they thought of video. That’s not the same as an experimental study! A good study would choose some appropriate content, and then have equivalent versions in text and video, and then have a comprehension test. (BTW, these experiments have been done.) Asking opinions, even of experts, isn’t quite as good. And these weren’t experts, they were just a collection of instructors. They might have valid opinions, but their expertise wasn’t a basis for deciding.

Worse, the folks conducting the study had. a. video. platform.  Sorry, that’s not an unbiased observer. They have a vested interest in the outcome. What we want is an impartial evaluation. This simply couldn’t be it. Not least, the author was the CEO of the platform.

It got worse. There was also a citation of the unjustified claim that images are processed 60K times better than text, yet the source of that claim has never been found! They also cited learning styles! Citing unjustified claims isn’t a good practice in sound research. (For instance, when reviewing articles, I used to recommend rejecting them if they talked about learning styles.) Yes, it wasn’t a research article on its own, but…I think misleading folks isn’t justified in any article (unless it’s illustrative and you then correct the situation).

Look, you can find valuable insights in lots of unexpected places, and in lots of unexpected ways. (I talk about how ‘business significance’ can be as useful as statistical significance.) However, an author with a vested interest, using an inappropriate method, to make claims that are supported by debunked data, isn’t it. Please, be careful out there!
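The statistical-versus-business-significance distinction can be made concrete with a toy calculation. With a big enough sample, a practically trivial difference becomes statistically significant. A minimal sketch (the numbers and the simplified z-test are mine, purely illustrative, assuming equal group sizes and a known, shared standard deviation):

```python
import math

def z_test(mean_a, mean_b, sd, n):
    """Two-sample z-test for equal-sized groups with a known, shared SD
    (a deliberate simplification for illustration)."""
    se = sd * math.sqrt(2 / n)          # standard error of the difference
    z = (mean_b - mean_a) / se
    # two-tailed p-value from the normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    d = (mean_b - mean_a) / sd          # Cohen's d: standardized effect size
    return z, p, d

# Half a point on a 100-point assessment, with 100,000 learners per group:
z, p, d = z_test(mean_a=70.0, mean_b=70.5, sd=10.0, n=100_000)
print(f"p = {p:.4f}, Cohen's d = {d:.2f}")  # significant p, trivial effect
```

The p-value screams “significant”, but an effect size of 0.05 standard deviations would move no business needle, which is why effect size (and business significance) matters alongside the p-value.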

New recommended readings

8 June 2021 by Clark

My Near Book Shelf

Of late, I’ve been reading quite a lot, and I’m finding some very interesting books. Not all have immediate take-homes, but I want to introduce a few to you with some notes. Not all will be relevant, but all are interesting and even important. I’ll also update my list of recommended readings. So here are my new recommended readings. (With Amazon Associates links: support your friendly neighborhood consultants.)

First, of course, I have to point out my own Learning Science for Instructional Designers. A self-serving pitch confounded with an overload of self-importance? Let me explain. I am perhaps overly confident that it does what it says, but others have said nice things. I really did design it to be the absolute minimum reading that you need to have a scrutable foundation for your choices. Whether it succeeds is an open question, so check out some of what others are saying. As to self-serving, unless you write an absolute mass best-seller, the money you make off books is trivial. In my experience, you make more money giving it away to potential clients as a better business card than you do on sales. The typically few hundred dollars I get a year for each book aren’t going to solve my financial woes! Instead, it’s just part of my campaign to improve our practices.

So, the first book I want to recommend is Annie Murphy Paul’s The Extended Mind. She writes about new facets of cognition that open up a whole area for our understanding. Written by a journalist, it is compelling reading. Backed by science, it’s valuable as well. In the areas I know and have talked about, e.g. emergent and distributed cognition, she gets it right, which leads me to believe the rest is similarly spot on. (Also her previous track record: I mind-mapped her talk on learning myths at a Learning Solutions conference.) Well-illustrated with examples and research, she covers embodied cognition, situated cognition, and socially distributed cognition, all important. Moreover, there’re solid implications for the redesign of instruction. I’ll be writing a full review later, but here’s an initial recommendation on an important and interesting read.

I’ll also alert you to Tania Luna’s and LeeAnn Renninger’s Surprise. This is an interesting and fun book that, instead of focusing on learning effectiveness, looks at the engagement side. As their subtitle suggests, it’s about how to Embrace the Unpredictable and Engineer the Unexpected. While the first bit of that is useful personally, it’s the latter that provides lots of guidance about how to take our learning from events to experiences. Using solid research on what makes experiences memorable (hint: surprise!) and illustrative anecdotes, they point out systematic steps that can be used to improve outcomes. It’s going to affect my Make It Meaningful work!

Then, without too many direct implications, but intrinsically interesting, is Lisa Feldman Barrett’s How Emotions Are Made. Recommended to me, this book is more for the cog sci groupie, but it does a couple of interesting things. First, it creates a more detailed yet still accessible explanation of the implications of Karl Friston’s Free Energy Theory. Barrett talks about how those predictions are working constantly and at many levels in a way that provides some insights. Second, she then uses that framework to debunk the existing models of emotions. The experiments with people recognizing facial expressions of emotion get explained in a way that makes clear that emotions are not the fundamental elements we think they are. Instead, emotions are social constructs! Which undermines, BTW, all the facial recognition of emotion work.

I also was pointed to Tim Harford’s The Data Detective, and I do think it’s a well-done work about how to interpret statistical claims. It didn’t grip me quite as viscerally as the aforementioned books, but I think that’s because I (over-)trust my background in data and statistics. It is a really well-done read about some simple but useful rules for how to be a more careful reviewer of statistical claims. While focused on parsing the broader picture of societal claims (and social media hype), it is relevant to evaluating learning science as well.

I hope you find my new recommended readings of interest and value. Now, what are you recommending to me? (He says, with great trepidation. ;)

Theory or Research?

17 July 2019 by Clark

There’s a lot of call for evidence-based methods (as mentioned yesterday): L&D, learning design, and more. And this is a good thing. But…do you want to be basing your steps on a particular empirical study, or the framework within which that study emerged? Let me make the case for one approach. My answer to theory or research is theory. Here’s why.

Most research experiments are done in the context of a theoretical framework. For instance, the work on worked examples comes from John Sweller’s Cognitive Load theory. Ann Brown & Annemarie Palincsar’s experiments on reading were framed within Reciprocal Teaching, etc. Theory generates experiments, which refine theory.

The individual experiments illuminate aspects of the broader perspective. Researchers tend to run experiments driven by a theory. The theory leads to a hypothesis, and then that hypothesis is testable. There  are some exploratory studies done, but typically a theoretical explanation is generated to explain the results. That explanation is then subject to further testing.

Some theories are even meta-theories! Collins & Brown’s Cognitive Apprenticeship (a favorite) is based upon integrating several different theories, including Reciprocal Teaching, Alan Schoenfeld’s work on examples in math, and the work of Scardamalia & Bereiter on scaffolding writing. And, of course, most theories have to account for others’ results from other frameworks if they’re empirically sound.

The approach I discuss in things like my Learning Experience Design workshops is a synthesis of theories as well. It’s an eclectic mix including the above mentioned, Cognitive Flexibility, Elaboration, ARCS, and more. If I were in a research setting, I’d be conducting experiments on engagement (pushing beyond ARCS) to test my own theories of what makes experiences engaging and effective. Which, not coincidentally, was the research I was doing when I was an academic (and led to Engaging Learning). (As well as integration of systems for a ubiquitous coaching environment, which generates many related topics.)

While individual results, such as the benefits of relearning, are valuable and easy to point to, it’s the extended body of work on topics that provides for longevity and applicability. Any one study may or may not be directly applicable to your work, but the theoretical implications give you a basis to make decisions even in situations that don’t directly map. There’s the possibility of extending too far, but it’s better than having no guidance at all.

Having theories to hand that complement each other is a principled way to design individual solutions and design processes. Similarly, strategic work (Revolutionize L&D) is an integration of diverse elements to make a coherent whole. Knowing, and mastering, the valid and useful theories is a good basis for making organizational learning decisions. And avoiding myths! Being able to apply them, of course, is also critical ;).

So, while they’re complementary, in the choice between theory or research I’ll point to one having more utility. Here’s to theories and those who develop and advance them!

Deeper Learning Reading List

20 April 2016 by Clark

So, for my last post, I had the Revolution Reading List, and it occurred to me that I’ve been reading a bit about deeper learning design, too, so I thought I’d offer some pointers here as well.

The starting point would be Julie Dirksen’s Design For How People Learn (already in its 2nd edition). It’s a very good interpretation of learning research applied to design, and very readable.

A new book that’s very good is Make It Stick, by Peter Brown, Henry Roediger III, and Mark McDaniel, the first being a writer who’s worked with the two scientists to distill learning research into 10 principles.

And let me mention two Ruth Clark books. One with Dick Mayer from UCSB, e-Learning and the Science of Instruction, that focuses on the use of media.  A second with Frank Nguyen and the wise John Sweller, Efficiency in Learning, focuses on cognitive load (which has many implications, including some overlap with the first).

Patti Shank has come out with a concise compilation of research called The Science of Learning that’s available to ATD members. Short and focused, with her usual rigor. If you’re not an ATD member, you can read her blog posts that contributed (click ‘View All’).

Dorian Peters’ book on Interface Design for Learning also has some good learning principles as well as interface design guidance. It’s not the same for learning as for doing.

Of course, a classic is a compilation of research by a blue-ribbon team led by John Bransford, How People Learn (online or downloadable). Voluminous, but pretty much state of the art.

Another classic is  the Cognitive Apprenticeship  model of Allen Collins & John Seely Brown. A holistic model abstracted across some seminal work, and quite readable.

The Science of Learning Center has an academic integration of research to instruction theory by Ken Koedinger, et al,  The Knowledge-Learning-Instruction Framework, that’s freely available as a PDF.

I’d be remiss if I don’t point out the Serious eLearning Manifesto, which has 22 research principles underneath the 8 values that differentiate serious elearning from typical versions.  If you buy in, please sign on!

And, of course, I can point you to my own series for Learnnovators on Deeper ID.

So there you go with some good material to get you going. We need to do better at elearning, treating it with the importance it deserves.  These don’t necessarily tell you how to redevelop your learning design processes, but you know who can help you with that.  What’s on your list?

Design Readings

31 May 2012 by Clark

Another book on design crossed my radar when I was at a retreat: in the stack of one of the other guests, alongside Julie Dirksen’s book Design for How People Learn, was Susan Weinschenk’s 100 Things Every Designer Needs to Know About People. The latter provides a nice complement to Julie’s, focusing on straight facts about how we process the world.

Dr. Weinschenk’s book systematically goes through categories of important design considerations:

  • How People See
  • How People Read
  • How People Remember
  • How People Think
  • How People Focus Their Attention
  • What Motivates People
  • People Are Social Animals
  • How People Feel
  • People Make Mistakes
  • How People Decide

Under each category are important points, described, buttressed by research, and boiled down into useful guidelines. This includes much of the research I talk about when I discuss deeper Instructional Design, and more.  While it’s written for UI designers mostly, it’s extremely relevant to learning design as well.  And it’s easy reading and reference, illustrated and to-the-point.

There are some really definitive books that people who design for people need to have read or have to hand. This fits into the latter category as does Dirksen’s book, while  Don Norman’s books, e.g.  Design of Everyday Things  fit into the former.  Must knows and must haves.

Where’s quality?

1 July 2025 by Clark

I get it, when you’ve a hammer, the whole world looks like a nail. Moreover, there’s money on the table, and it’d be a shame not to grab onto it. Still, there’s also integrity. And, frankly, I fear that we’re going down the wrong path. So I’ll rail again, by asking “where’s quality?”

So, a colleague recently provided a link to a report by a well-known analyst. In the report, they call for an AI revolution for L&D. And, yes, I do believe L&D needs a revolution, I wrote a whole book about it. However, I fear that the direction under advisement is focusing on the wrong thing. So here’s what the initial post summarized about the article:

* Despite significant investment, many companies are utilizing outdated learning models that do not deliver substantial business impact.

* Learning needs to be dynamic, personalized, and focused on enablement.

* Chief Learning Officers (CLOs) should re-establish themselves as leaders within the enterprise, focusing not just on learning but on employee enablement.

* Artificial intelligence (AI) offers the potential to speed up content creation, lower costs, and improve operational efficiency, which allows Learning and Development (L&D) to adopt a wider and more strategic role.

Do you see anything wrong with this? I actually agree  with the first point, and probably the third. However, I think we can make a strong case that the second is not the primary issue. And very clearly the fourth point identifies what’s wrong in the second, at least before the last phrase.

So, first, when we invoke learning, we should be very careful to do it right. There are claims that up to 90% of our investment in training is going to waste. However, it’s not because our learning designs aren’t ‘dynamic, personalized, and focused on enablement’, it’s because our learning isn’t designed according to what research says works. Now, our learning needs change as our abilities improve. We start knowing what we need and why. There’re also times when performance support can be more effective than courses. Courses can still be valid, if they’re done well.

That’s the point I continue to make: I maintain that we’ll save more money and have more impact if we focus on good learning design before we invest in fancy technology. That includes AI. We want meaningful practice (which I suggest is still a role for designers, as AI doesn’t understand context), not information dump. Knowledge ≠ ability to perform. What we need is practice of doing. At least for novices. But beyond that, only effective self-learners will be truly able to leverage information on their own to learn. Even social learning gets better when we understand learning.

So, learning needs to be evidence-informed, first. Then, and only then, can it be dynamic, personalized, etc. Even knowing when and how to use AI as performance support counts (a more valid role, tho’ there needs to be scrutiny of the advice somehow, as AIs can give bad advice). Sure, CLOs do need to be leaders in the enterprise, but that comes from understanding cognition and learning, and then using those to better enable innovation as well as optimizing performance. Enablement’s fine as a premise, but it’s got to come from understanding. For instance, you can’t get employees contributing just because you put in AI; you need to create a learning culture. (Putting AI into a Miranda organization isn’t going to magically fix the problem.)

Let me be clear: my argument is not Gen AI bad vs Gen AI good. No, it’s learning science involved versus not. I am fine if we start using AI, Gen or otherwise, but after we’ve made sure we’re doing the right things first. Let me pose a hypothetical: for $30K, would you rather have 3 courses or 10? What if those 3 courses were designed to actually have an impact, versus 10 that are pretty and full of information, but won’t move a single meaningful needle for the organization? Sure, I’ve made up the numbers, but the reality is that we’re talking about achieving real outcomes versus making folks feel good; I’ll suggest “it’s pretty and people like it” is no substitute for improving the outcome.

This makes the last line above more problematic: we don’t need to speed up content creation. Content dump ≠ learning. Lowering costs and improving efficiency is all good, but after you’ve ensured adequate effectiveness. And no one seems to be talking about that. That’s why I’m asking “where’s quality?” It’s not being discussed, because AI is the next shiny object: “there’s plenty of money to be made”. Anyone else sensing a bubble? And that’s without even considering IP ethics, environmental impact, security, and VC funding. The business model is still up in the air. Hence, my question. Your thoughts?

As an aside, there’s a quote in the paper that illustrates their lack of deep understanding: “As our attention spans shorten”. Ahem. While there’s a credible argument made by Gloria Mark, I still suggest it’s not a change in our cognitive architecture, but instead availability and familiarity. We can still disappear for hours into a novel, movie, or game. It’s a fallacious basis for an argument.

Truth in advertising: I was tempted to title this “WTAH”, but…I decided that might be too incendiary ;). Hence, “Where’s quality?” Still, you can imagine my mood while reading and then writing this.

Why science?

15 April 2025 by Clark

I’ve written in praise of the cognitive and learning sciences. I, however, need to take a step back. It’s becoming increasingly clear to me, sadly, that there are attacks on science itself.  Yet, I have a strong belief that it matters. So let me briefly address the question of why science.

As background, I have been steeped in science. It was one of my favorite topics in school, and in college. My PhD is in the underpinnings of how we think. Though it’s been a long while since I was an active scientific researcher, I still apply what’s known. Moreover, I continue to track developments, so I can continue to do so. 

As a result, I’ve been a fan of the work of scientists in the cognitive and learning fields. I’ve not only had training in the methods, but I also continue to explore more broadly the methods and the applications. I also love the translators who take that research written in the original academese and turn it into practical advice. Heck, I’m co-director of a society about evidence-based practices. 

There has been some ‘confusion’ about the scientific process. “How can you trust it if it admits it’s been wrong?” Er, that’s what it’s about, continually creating explanations about the world. When we know more, we may need to change our explanations. We went from the sun circling the earth to the other way around, and we no longer (should) think the world is flat. If you don’t believe in the findings, how (and why) are you reading this? Technologies developed from scientific endeavor. 

To be fair, science has been used for ill as well as good. That’s about people’s ethics, not the outcomes. We have to be mindful of how we apply what we learn. That’s up to our values and morals, about which science actually has a lot to say as well. For instance, I’ve made the case that research tells us we do better when we’re inclusive. That’s science telling us what values lead to the best outcomes. When we work with what we know about how we think, work, and learn, we improve the outcomes.

The evidence says that science is better than any alternative. When we apply evidence-based practices, we get the best results. That’s a win. When we turn our backs on it, we lose. Lives can be negatively impacted or lost. That’s not a win. And for our orgs, ignoring science in marketing, operations, sales, etc doesn’t make sense. So, too, for learning and ‘human resources’ in general. And, that’s true for society and government as well. So let’s make sure we’re making decisions in ways that align with science. It may seem more expedient in the short-term to do otherwise, but the long-term results argue for us doing the right thing. When there’re conflicts between beliefs and the evidence, things go better when we adapt beliefs and go with the evidence. “Why science” is because it works better. 

Is “Workflow Learning” a myth?

24 September 2024 by Clark

There’s been a lot of talk, of late, about workflow learning. To be fair, Jay Cross was talking about learning in the flow of work way back in the late 1990s, but the idea has recently been co-opted and become current. Yet, the question remains whether it’s real or a mislabeling (something I’m kind of anal about, see microlearning). So, I think it’s worth unpacking the concept to see what’s there (and what may not be). Is workflow learning a myth?

To start, the notion is that it’s learning at the moment of need. Which sounds good. Yet, do we really need learning? The idea Jay pointed to in his book Informal Learning, was talking about Gloria Gery’s work on helping people in the moment. Which is good! But is it learning? Gloria was really talking about performance support, where we’re looking to overcome our cognitive limitations. In particular, memory, and putting the information into the world instead of in the head. Which isn’t learning! It’s valuable, and we don’t do it enough, but it’s not learning.

Why? Well, because learning requires action and reflection. The latter can just be thinking about the implications, or, in Harold Jarche’s Personal Knowledge Mastery model, it’s about experimenting and representing. In formal learning, of course, it’s feedback. I’ve argued we could do that, by providing just a thin layer on top of our performance support. However, I’ve never seen the same! So, you’re going to do, and then not learn. Okay, if it’s biologically primary (something we’re wired to learn, like speaking), you’re liable to pick it up over time, but if it’s biologically secondary (something we’ve created and aren’t tuned for, e.g. reading) I’d suggest it’s less likely. Again, performance is the goal. Though learning can be useful to support comprehending context and making complex decisions, which is what we’re good at.

What is problematic is the notion of workflow and reflection in conjunction. Simply, if you’re reflecting, you’re by definition out of the workflow! You’re not performing, you’re stopping and thinking. Which is valuable, but not ‘flow’. Sure, I may be overly focused on workflow being in the ‘zone’, acting instead of thinking, but that, to me, is really the notion. Learning happens when you stop and contemplate and/or collaborate.

So, if you want to define workflow to include the reflection and thoughtful work, then there is such a thing. But I wonder if it’s more useful to separate out the reflection as things to value, facilitate, and develop. It’s not like we’re born with good reflection practices, or we wouldn’t need to do research on the value of concept mapping and sketch noting and how it’s better than highlighting. So being clear about the phases of work and how to do them best seems to me to be worthwhile.

Look, we should use performance support where we can. It’s typically cheaper and more effective than trying to put information into the head. We should also consider adding some learning content on top of performance support in cases where knowing why we’re doing something is as helpful as knowing what to do. Learning should be used when it’s the best solution, of course. But we should be clear about what we’re doing.

I can see arguments why talking about workflow learning is good. It may be a way to get those not in our field to think about performance support. I can also see why it’s bad, leading us into the mistaken belief that we can learn while we do without breaking up our actions. I don’t have a definitive answer to “is workflow learning a myth” (so this would be an addition to the ‘misconceptions’ section of my myths book ;). What I think is important, however, is to unpack the concepts, so at least we’re clear about what learning is, about what workflow is, and when we should do either. Thoughts?

Sleep & Walking

6 August 2024 by Clark

We interrupt our regularly scheduled blog for this public service announcement. We will resume normal broadcasting after this brief message.

My late friend, Jay Cross, once wrote a post that said something to the effect of: “if you want to have better health, lose weight…<and a litany of other health benefits>…start walking.”  My reasons are in addition to that, actually. I also believe strongly in sleep. (Let me be clear, not sleep walking, of which I have no knowledge.) So here’re some thoughts on sleep & walking.

First, let’s talk sleep. I don’t know why (self-justification?), but I’ve regularly tracked the research on sleep. And, I find some robust results:

  • Most of us really are best off with 8 hours of sleep
  • Reading in the same place you sleep means you don’t read nor sleep as well
  • Keeping a regular sleep schedule helps
  • Naps are good

Also, of course, most people don’t do this. Personally, I try. It used to be about optimizing performance, but these days it’s more about maintaining performance! I can nap, though I usually don’t need to because of the first three. Also, I do try to get my eight hours (and am generally successful). I definitely don’t read in bed (tho’ occasionally I’ll get up to write something down so it’s off my brain and I can go back to sleep). And I try to be pretty regular in my sleep. I’m just following what’s recommended, and it seems to work. There’s more I’m not necessarily so good at, of course.

When it comes to walking, I don’t get it every day. That’s ok, because I try to exercise five days a week, and three of those are on my torture device, er, exercise machine. I now do 30 minutes three times a week, per the doc, who asked for that much time at >100 beats per minute. On top of that are two strength sessions and some physio exercises to counteract my sedentary work life. I was doing 20+ minutes with High Intensity Interval Training (10 of those minutes alternating 30 seconds intense, 30 seconds not), and that’s still the case; I just extended the cool down.

The other two days a week I walk (sometimes more, if we do it on our weekend). I have a set route, so my mind can be free. Annie Murphy Paul, whose book The Extended Mind I cited in my recent ‘post cognitive’ presentation (requires free membership) for the LDA, talks about the benefits of being out in nature. Of course, my walk is through my neighborhood, but it’s a bit wild (no sidewalks; wild animals such as turkeys, hawks, quail, and the occasional coyote can be spotted).

My rationale for walking, however, in addition to health, is time to think! I come up with blog post topics, resolve questions, and more. Further, I deliberately don’t wear headphones, so I stay aware of my surroundings while letting my mind wander. I also walk on the left side of the road, to face oncoming traffic, which is both a good idea and the law. (Too often I see folks walking with earphones, on the wrong side of the road, sometimes even with animals on a leash or a kid in a stroller! Yikes!)

We know that having time to reflect works. Being outside is also a boon. Together, it’s valuable time to think, as well as a healthy activity. I encourage you to follow good sleep practices and get in some walking (or equivalent, if there’re reasons that’s not possible). I’ve heard that walking conversations are also productive, but I work from home, so…

We now return you to your regularly scheduled blog, already in progress.
