Learnlets


Clark Quinn’s Learnings about Learning

Archives for April 2017

Innovation Thoughts

27 April 2017 by Clark

So I presented on innovation to the local ATD chapter a few weeks ago, and they did an interesting and nice thing: they had the attendees document their takeaways. I promised to write a blog post about it, and I've finally received the list of thoughts, so here are my reflections. As an aside, I've recently written separate articles on L&D innovation for both CLO magazine and the Litmos blog, so you can check those out, too.

I started by talking about why innovation was needed, and then what it was. They recalled that I pointed out that, by definition, an innovation is not only a new idea but one that is implemented and leads to better results. I made the point that when you're innovating, designing, researching, trouble-shooting, etc., you don't know the answer when you start, so these are learning situations, though informal, not formal. And they heard me note that agility and adaptation are premised on informal learning of this sort, and that the opportunity is for L&D to take up the mantle to meet the increasing need.

There was interest, but some lack of clarity, around meta-learning. I emphasized that learning to learn may be your best investment, but given that you're devolving responsibility, you shouldn't assume that individuals automatically possess optimal learning skills. The focus then becomes developing learning-to-learn skills, which of necessity is done across some other topic. And, of course, it requires the right culture.

There were some terms they heard that they weren't necessarily clear on, so per the request, here are the terms (from them) and my definitions:

  • Innovation by Design: here I mean deliberately creating an environment where innovation can flourish. You can't plan for innovation (it's ephemeral), but you can certainly create a felicitous environment.
  • Adjacent Possible: this is a term Steven Johnson used in his book Where Good Ideas Come From, and my take is that it means that lateral inspiration (e.g. ideas from nearby: related fields or technologies) is where innovation happens, but it takes exposure to those ideas.
  • Positive Deviance:  the idea (which I heard of from Jane Bozarth) is that the best way to find good ideas is to find people who are excelling and figure out what they’re doing differently.
  • Hierarchy and Equality: I'm not quite sure what they were referring to here (I think more along the lines of Husband's Wirearchy versus hierarchy), but the point is to reduce the levels and start tapping into the contributions possible from all.
  • Assigned roles and vulnerability: I'm even less certain what's being referred to here (I can't be responsible for everything people take away ;), but I could interpret this to mean that it's hard to feel safe contributing if you're in a hierarchy and are commenting on someone above you. Which again is an issue of safety (which is why I advocate that leaders 'work out loud', and it's a core element of Edmondson's Teaming; see below).

I used the Learning Organization Dimensions diagram (Garvin, Edmondson & Gino) to illustrate the components of a successful innovation environment, and these were reflected in their comments. A number mentioned psychological safety in particular, as well as the other elements of the learning environment. They also picked up on the importance of leadership.

Some other notes that they picked up on included:

  • best principles instead of best practices
  • change is facilitated when the affected individuals choose to change
  • brainstorming needs individual work before collective work
  • trust is required to devolve responsibility
  • the importance of coping with ambiguity

One takeaway was provided that I know I didn't say, because I don't believe it, but it's interesting as a comment:

“Belonging trumps diversity, and security trumps grit”

This is an interesting belief, and I think that’s likely the case if it’s  not safe to experiment and make mistakes.

They recalled some of the books I mentioned, so here’s the list:

  • The Invisible Computer  by Don Norman
  • The Design of Everyday Things  by Don Norman
  • My  Revolutionize Learning and Development  (of course ;)
  • XLR8 by John Kotter (with the 'dual operating system' hypothesis)
  • Teaming to Innovate by Amy Edmondson (I reviewed it)
  • Working Out Loud by John Stepper
  • Scaling Up Excellence by Robert I. Sutton and Huggy Rao (blogged)
  • Organize for Complexity by Niels Pflaeging (though they heard this as a concept, not a title)

It was a great evening, and really rewarding to see that many of the messages stuck. So, what are your thoughts around innovation?

 

Human Learning is Not About to Change Forever

26 April 2017 by Clark

In my inbox was an announcement about a new white paper with the intriguing title  Human Learning is About to Change Forever.  So naturally I gave up my personal details to download a copy.  There are nine claims in the paper, from the obvious to the ridiculous. So I thought I’d have some fun.

First, let's get clear. Our learning runs on our brain, our wetware. And that's not changing in any fundamental way in the near future. As a famous article once had it: phenotypic plasticity triumphs over genotypic plasticity (in short, our human advantage has come via our ability to adapt individually and learn from each other, not through species evolution). The latter takes a long time!

And as a starting premise, the “about to” bit implies these things are around the corner, so that’s going to be a bit of my critique. But nowhere near  all of it.  So here’s a digest of the  nine claims and my comments:

  1. Enhanced reality tools will transform the learning environment. Well, these tools will certainly augment the learning environment (pun intended :). There's evidence that VR leads to better learning outcomes, and I have high hopes for AR, too. Though is that a really fundamental transition? We've had VR and virtual worlds for over a decade at least. And is VR an evolutionary or revolutionary change from simulations? Then they go on to talk about performance support. Is that transforming learning? I'm on record saying contextualized learning (e.g. AR) is the real opportunity to do something interesting, and I'll buy it, but we're a long way away. I'm all for AR and VR, but saying that it puts learning in the hands of the students is a design issue, not a technology issue.
  2. People will learn collaboratively, no matter where they are. Um, yes, and…? They're already doing this, and we've been social learners for as long as we've existed. I still think the possibility of collaboratively creating in 3D in virtual worlds is potentially cool, but even as the technology limitations come down, the cognitive limitations remain. I'm big on social learning, but mediating it through technology strikes me as just a natural step, not transformation.
  3. AI will banish intellectual tedium. Everything is awesome. Now we're getting a wee bit hype-ish. The fact that software can parse text and create questions is pretty impressive. But questions about semantic knowledge aren't going to transform education. Whether the questions are developed by hand or by machine, they aren't likely on their own to lead to new abilities to do. And AI is not yet at the level (nor will it be soon) where it can take content and create compelling activities that will drive learners to apply knowledge and make it meaningful.
  4. We will maximize our mental potential with wearables and neural implants. Ok, now we're getting confused and a wee bit silly. Wearables are cool, and in cases where they can sense things about you and the world, they can start doing some very interesting AR. But transformative? This still seems like a push. And neural implants? I don't like surgery, and messing with my nervous system when you still don't really understand it? No thanks. There's a lot more to it than managing to adjust firing to control limbs. The issue is again about the semantics: if we're not getting meaning, it's not really fundamental. And given that our conscious representations are scattered across our cortex in rich patterns, this just isn't happening soon (nor do I want that much connection; I don't trust them not to 'muck about').
  5. Learning will be radically personalized.  Don’t you just love the use of superlatives?  This is in the realm of plausible, but as I mentioned before, it’s not worth it until we’re doing it on  top of good design.  Again, putting together wearables (read: context sensing) and personalization will lead to the ability to do transformative AR, but we’ll need a new design approach, more advanced sensors, and a lot more backend architecture and semantic work than we’re yet ready to apply.
  6. Grades and brand-name schools won't matter for employment. Sure, that MIT degree is worthless! Ok, so there's some movement this way. That would actually be a nice state of affairs. It'd be good if we started focusing on competencies and built new brand names around real enablement. I'm not optimistic about the prospects, however. Look at how hard it is to change K12 education (the gap between what's known and what's practiced hasn't significantly diminished in the past decades). Market forces may change it, but the brand names will adapt too, once it becomes an economic necessity.
  7. Supplements will improve our mental performance. Drink this and you'll fly! Yeah, or crash. There are ways I want to play with my brain chemistry, and ways I don't. As an adult! I really don't want us experimenting on children, risking potential long-term damage, until we have a solid basis. We've had chemicals supporting performance for a while (see military use), but we're still in the infancy here, and I'm not sure our experiments with neurochemicals can surpass what evolution has given us, at least not without some pretty solid understanding. This seems like long-term research, not near-term plausibility.
  8. Gene editing will give us better brains. It's alive! Yes, Frankenstein's monster comes to mind here. I do believe it's possible that we'll be able to outdo evolution eventually, but I reckon there's still much we don't know about the human genome or the human brain. This similarly strikes me as a valuable long-term research area, but in the short term there are so many gene interactions we don't yet understand that I'd hate to risk the possible side-effects.
  9. We won't have to learn: we'll upload and download knowledge. Yeah, it'll be great! See my comments above on neural implants: this isn't yet ready for primetime. More importantly, this is supremely dangerous. Do I trust what you say you're making available for download? Certainly not, given what happens now with many things, including advertisements. Think about downloading to your computer: not just spam ads, but viruses and malware. No thank you! Not that I think it's close, but I'm not convinced we can 'upgrade our operating system' anyway. Given the way our knowledge is distributed, the notion of changing it with anything less than practice seems implausible.

Overall, this reads more like a sci-fi fan's dreams than a realistic assessment of what we should be preparing for. No, human learning isn't going to change forever. The ways we learn, e.g. the tools we learn with, are changing, and we're rediscovering how we really learn.

There are better guides available to what's coming in the near term that we should prepare for. Again, we need to focus on good learning design, and on leveraging technology in ways that align with how our brains work, not trying to meld the two. So, there are my opinions; I welcome yours.

Workplace of the Future video

25 April 2017 by Clark

Someone asked for a video on the  Workplace of the Future  project, so I created one. Thought I’d share it with you, too.  Just a walkthrough with some narration, talking about some of the design decisions.

One learning for me (that I'm sure you knew): a script really helps! It took multiple tries, for a variety of reasons. I'm not a practiced video creator, so be gentle, please!

Top 10 Tools for @C4LPT 2017

19 April 2017 by Clark

Jane Hart is running her annual Top 100 Tools for Learning poll (you can vote too), and here's my contribution for this year. These are my personal learning tools, ordered according to Harold Jarche's Seek-Sense-Share model, as ways to find answers, to process them, and to share for feedback:

  1. Google Search is my go-to tool when I come across something I haven't heard of. I typically choose the Wikipedia link if there is one, but I'll also open several other links and peruse across them to generate a broader perspective.
  2. I use GoodReader on my iPad to read PDFs and mark up journal submissions.  It’s handy for reading when I travel.
  3. Twitter  is one of several ways I keep track of what people are thinking about and looking at. I need to trim my list again, as it’s gotten pretty long, but I keep reminding myself it’s drinking from the firehose, not full consumption!  Of course, I share things there too.
  4. LinkedIn is another tool I use to see what's happening (and occasionally engage in). I have a group for the Revolution, which is largely me posting things, but I do try to stir up conversations. I also see and occasionally comment on postings by others.
  5. Skype lets me stay in touch with my ITA colleagues, hence it's definitely a learning tool. I also use it occasionally to have conversations with folks.
  6. Slack is another tool I use with some groups  to stay in touch. People share there, which makes it useful.
  7. OmniGraffle is my diagramming tool, and diagramming is a way I play with representing my understandings. I will put down some concepts in shapes, connect them, and tweak until I think I’ve captured what I believe. I also use it to mindmap keynotes.
  8. Word is a tool I use to play with words as another way to explore my thinking. I use outlines heavily and I haven’t found a better way to switch between outlines and prose. This is where things like articles, chapters, and books come from. At least until I find a better tool (haven’t really got my mind around Scrivener’s organization, though I’ve tried).
  9. WordPress is my blogging tool (what I'm using here), and serves both as a thinking tool (if I write it out, it forces me to process it) and as a share tool (obviously).
  10. Keynote is my presentation tool. It's where I'll noodle out ways to share my thinking. My presentations may get rendered to PowerPoint eventually out of necessity, but it's my creation and preferred presentation tool.

Those are my tools, now what are yours?  Use the link to let Jane know, her collection and analysis of the tools is always interesting.

What you learn is not as important as how you learn!

18 April 2017 by Clark

I'm going a bit out on a limb here, with a somewhat heretical statement: how you learn is more important than what you learn! (You could say pedagogy supersedes curricula, but that's just being pedantic. ;) And I'm pushing the boundaries of the concept a bit, but I think it's worth floating as an idea. It's meta-learning, of course, learning how to learn! The important point is to focus on what's being developed. And I mean this at two levels.

This was triggered by seeing two separate announcements of new learning opportunities. Both are focused on current skills, so both feature advanced curricula covering modern topics. While the pedagogy of one isn't obvious (though claimed to be very practical), the other clearly touts the ways in which the learning happens. And it's good.

So the pedagogy is very hands-on. In fact, it's an activity-based curriculum (in my terms), in that you progress by completing assignments very closely tied to what you'll do on the job. There are content resources available (e.g. expert videos) and instructor feedback, all set in a story. And this is better than a content-based curriculum, so this pedagogy is really very apt for preparing people to do jobs. In fact, they are currently applying it across three different roles that they have determined are necessary.

But if you listen to the longer version (video) of my activity-based learning curriculum story, you'll see I carry the pedagogy forward. I talk about gradually handing over responsibility to the learner for the activities, content, product, and reflection. This is important for learners to start becoming self-improving learners. The point is to develop their ability to do meta-learning.

Doing so, by the way, requires that you make your pedagogy visible: the choices that you made, and why. Learners, to adopt their own pedagogy, need to see a pedagogy. If you narrate your pedagogy, that is, document your alternatives and the rationales for your choices, they can actually understand more about the learning process itself.

And this, to me, is the essence of the claim. If you start a learning process about  something, and then hand off responsibility for the learning, while making clear the choices that led there, learners become self-learners. The courses that are designed in the above two cases  will, of necessity, change. And graduates from those courses might be out of date before long,  unless they’ve learned  how to stay current. Unless they’ve learned meta-learning.  That can be added in, and it may be implicit, but I’ll suggest that learning to learn is a more valuable long-term outcome than the immediate employability.

So that’s my claim: in the long term, the learner (and society) will be better off if the learner can learn to self-improve.  It’s not an immediate claim or benefit, but it can be wrapped around something that  is of immediate benefit.  It’s the ‘secret sauce’ that organizations could be adding in, whether internally or in their offerings. What surprises me is how seldom I see this approach taken, or even discussed.

Artificial Intelligence or Intelligence Augmentation

12 April 2017 by Clark

In one of my networks, a recent conversation has been on Artificial Intelligence (AI) vs Intelligence Augmentation (IA). I’m a fan of both, but my focus is more on the IA side. It triggered some thoughts that I penned to them and thought I’d share here [notes to clarify inserted with square brackets like this]:

As context, I'm an AI 'groupie', and was a grad student at UCSD when Rumelhart and McClelland were coming up with PDP (parallel distributed processing, aka connectionist or neural networks). I personally was a wee bit enamored of genetic algorithms, another form of machine learning (but one from which it's a bit easier to extract semantics, or maybe just simpler for me to understand ;).
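
As a quick aside for anyone who hasn't run across genetic algorithms: you encode candidate solutions as 'genomes', keep the fitter ones, recombine and mutate them, and repeat. Here's a minimal, purely illustrative sketch (my own toy example; the names and parameters are arbitrary assumptions, not anything from that era's work). The evolved bit string is directly readable, which hints at why the semantics are easier to pull out than from a network's weights:

```python
# Illustrative toy genetic algorithm (my own sketch, arbitrary parameters):
# it evolves a bit string toward all-ones ("OneMax"). The evolved individual
# is directly interpretable, unlike a neural network's weight matrices.
import random

POP_SIZE, GENOME_LEN, MUTATION_RATE, GENERATIONS = 30, 20, 0.02, 60

def fitness(genome):
    # Count of set bits: the (trivially interpretable) objective.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover: splice two parents into one child.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def select(population):
    # Tournament selection: pick the fitter of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
print(f"best after {GENERATIONS} generations: {best} (fitness {fitness(best)})")
```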

Ed Hutchins was talking about distributed cognition at the same time, and that remains a piece of my thinking about augmenting ourselves. We don't do it all in our heads, so what can be in the world and what has to be in the head? [the IA bit, in the context of Doug Engelbart]

And yes, we were following fuzzy logic too (our school was definitely on the left coast of AI ;). Symbolic logic was considered passé! Maybe that's why Zadeh [progenitor of fuzzy logic] wasn't more prominent here (making formal logic probabilistic may have seemed like patching a bad core premise)? And I managed (by hook and crook, courtesy of Don Norman ;) to attend an elite AI convocation held at an MIT retreat with folks like McCarthy, Dennett, Minsky, Feigenbaum, and other lights of both schools. (I think Newell was there, but I can't state for certain.) It was groupie heaven!

Similarly, it was the time of the emergence of 'situated cognition' too (a contentious debate, with proponents like Greeno and even Bill Clancey while old-school symbolists like Anderson and Simon argued to the contrary). Which reminds me of Harnad's Symbol Grounding problem, a much meatier objection to real AI than Dreyfus' or the Chinese Room concerns, in my opinion.

I do believe we ultimately will achieve machine consciousness, but it's much further out than we think. We'll have to understand our own consciousness first, and that's going to be tough, MRI and other such research notwithstanding. And it may mean simulating our cognitive architecture on a sensor-equipped processor that must learn through experimentation and feedback as we do, e.g. taking a few years just to learn to speak! ("What would it take to build a baby" was a developmental psych assignment I foolishly attempted ;)

In the meantime, I agree with Roger Schank (I think he was at the retreat too) that most of what we're seeing, e.g. Watson, is just fast search, or pattern-learning. It's not really intelligent, even if it's doing it like we do (the pattern learning). It's useful, but it's not intelligent.

And, philosophically, I agree with those who have stated that we must own the responsibility to choose what we take on and what we outsource. I'm all for self-driving vehicles, because the alternative is pretty bad (tho' could we do better in driver training or licensing, like in Germany?). And I do want my doctor augmented by powerful rote operations that surpass our own abilities, and also by checklists and policies and procedures, anything that increases the likelihood of a good diagnosis and prescription. But I want my human doctor in the loop. We still haven't achieved the integration of separate pattern-matching and exception handling that our own cognitive processor provides.

Classical and Rigorous

11 April 2017 by Clark

A recent twitter spat led me to some reflections, and I thought I'd share. In short, an individual I do not know attacked one of my colleague Harold's diagrams, saying it stood against "everything classical and rigorous". My somewhat flip comment was that "the classical and rigorous is also outdated and increasingly irrelevant. Time for some new thinking". Which then led to me being accused of spreading BS. And I don't take kindly to someone questioning my integrity. (I'm an ex-academic, after all! ;) I thought I should point out why I said what I said.

Theories change. We used to believe that the sun circled the earth, and that the world was flat. More relevantly, we used to have management theories that optimized output by treating people as machines. And that thinking is still visible in typical business practices that are hierarchical and mechanical. We continue to see practices like yearly reviews, micromanagement, incentives for limited performance metrics, and curtailed communication. They worked in an industrial age, but we're in a new environment, and we're finding that we need new methods.

And, let me add, these old practices are not aligned with what we know about how our brains work.  We’ve found that the best outcomes come from people working in environments where it’s safe to share. Also, we get better results when we’re collaborating, not working independently. And better outcomes occur when we’re given purpose and autonomy to pursue, not micromanagement.  In short, many of the classical approaches, ones that are rigorously defined and practiced, aren’t optimal.

And it’s not just me saying this. Respected voices are pointing in new directions based upon empirical research.  In XLR8, Kotter’s talking about leveraging more fluid networks for innovation to complement the hierarchy. In  Teaming, Edmondson is pointing to more collective ways to work. And in Scaling Up Excellence, Sutton & Rao point to more viral approaches to change rather than the old monolithic methods. The list goes on.

Rigor is good. Classical, in the sense of tested and proven methods, is  good. But times change, and our understanding expands. Just yesterday I listened to Charles Reigeluth (a respected learning design theorist) talk about how theories change. He described how most approaches have an initial period where they’re being explored and results may not be optimal, but you continue to refine them and ultimately the results can  supersede previous approaches.  Not all approaches will yield this, but it appears to me that we’re getting convergent evidence on theoretical and empirical grounds that the newer approaches to business, as embodied in stuff like Harold’s diagrams and other representations  (e.g. the Revolution book), are more effective.

I don’t knowingly push stuff I don’t believe is right. And I try to take a rigorous approach to make sure I’m avoiding confirmation bias and other errors. It’s got to align with sound theory, and pass scrutiny in the methodology.  I try to be the one cutting through the BS!  I stand behind my claim that new ways of working are an improvement over the old ways.  Am I missing something?

 

Exploration Requirements

5 April 2017 by Clark

In yesterday's post, I talked about how new tools need to be coupled with practices to facilitate exploration. And I wanted to explore (heh) more about what's required. The metaphor is old-style exploration, and the requirements to succeed, without any value judgment on the motivations that drove this exploitation, er, exploration ;). I'm breaking it up into tools, communication, and support.

Tools

So, one of the first requirements was to have the necessary tools to explore. In the old days that could include means to navigate (chronometer, compass), ways to represent learnings/discoveries (map, journal), and resources (food, shelter, transport). It was necessary to get to the edge of the map, move forward, document the outcomes, and successfully return. This hasn't changed in concept.

So today, the tools are different, but the requirements are similar. You need to figure out what you don't know (the edge of the map), figure out how to conduct an experiment (move forward), measure the results (document outcomes), and then use that to move on. (Fortunately, the 'return' part isn't a problem so much!) The digital business platform is one such tool, but social media are also necessary.

Communication

What happened after these expeditions was equally important. The learnings were brought back, published, presented, and shared. At meetings, debates proceeded about what was learned: was this a new animal or merely a variation? Does this mean we need to change our explanations of animals, plants, geography, or culture? The writings exchanged in letters, magazines, and books explored these questions in more depth.

These days, we similarly need to communicate our understandings. We debate via posts, comments, and microblogs. More thought-out ideas become presentations at conferences, or perhaps white papers and articles. Ultimately, we may write books to share our thinking. Of course, some of it happens within the organization, whether it's the continual dialog around a collaborative venture or 'show your work' (aka work out loud).

Support

Such expeditions in the old days were logistically complex, and required considerable resources. Whether funded by governments, vested interests, or philanthropy, there was an awareness of risk and rewards. The rewards of knowledge as well as potential financial gain were sufficient to drive expeditions that ultimately spanned and opened the globe.

Similarly, there are risks and rewards in continual exploration on the part of organizations, but fortunately the risks are far less.  There is still a requirement for resourcing, and this includes official support and a budget for experiments that might fail. It has to be safe to take these risks, however.

These elements need to be aligned, which is non-trivial. It requires crossing silos, in most cases, to get the elements in place including IT, HR, and operations.  That’s where strategy, culture, and infrastructure can come together to create an agile, adaptive organization that can thrive in uncertainty. And isn’t that where you need to be?

Continual Exploration

4 April 2017 by Clark

I was reading about Digital Business Platforms, which are a move away from siloed IT systems toward a unified environment. Which, naturally, seems like a sensible thing to do. The benefits are about continual innovation, but I wonder if a more apt phrase is instead continual exploration.

The premise is that  it’s now possible to migrate from separate business systems and databases, and converge that data into a unified platform. The immediate benefits are that you can easily link information that was previously siloed, and track real time changes. The upside is the ability to try out new business models easily.  And while that’s a good thing, I think it’s not going to get fully utilized out of the box.

The concomitant component, it seems to me, is the classic ‘culture’ of learning. As I pointed out in last week’s post, I think that there are significant  benefits to leveraging the power of social media to unleash organizational outcomes. Here, the opportunity is to facilitate easier experimentation. But that takes more than sophisticated tools.

These tools, by integrating the data, allow new combinations of data and formulas to be tried and tested easily. This sort of experimentation is critical to innovation, where small trials  can be conducted, evaluated, and reviewed to refine or shift direction.   This sort of willingness to make trials, however, isn’t necessarily going to be successful in all situations.  If it’s not safe to experiment, learn from it, and share those learnings, it’s unlikely to happen.

Thus, the willingness to continually experiment is valuable. But I wonder if a better mindset is exploration. You don't want to just experiment, you want to map out the space of possibilities, and track the outcomes that result from different 'geographies'. To innovate, you need to try new things. To do that, you need to know what the things are you could try, e.g. the places you haven't been that perhaps look promising.

It has to be safe to be trying out different things. There is trust and communication required as well as resources and permission. So here’s to systematic experimentation to yield continual exploration!
