Learnlets

Clark Quinn’s Learnings about Learning

Vale Roger Schank

3 February 2023 by Clark 4 Comments

I’d first heard of Roger Schank’s work as an AI ‘groupie’ during my college years. His contributions to cognitive science have been immense. He was a challenging personality and intellect, and yet he fought for the right things. He passed away yesterday, and he will be missed.

Roger’s work connected story to cognition. His insight that we hold expectations about events came from an experience at a restaurant with an unusual approach: at Legal Seafoods (at the time), you paid before being served (more like fast food than a sit-down venue). Surprised, Roger realized that there must be cognitive structures for events (his ‘scripts’), akin to the schemas proposed for things. He investigated the phenomenon computationally, advancing both artificial intelligence and cognitive science. Roger subsequently applied his thinking to education, writing Engines for Education (amongst other works) while leading a variety of efforts using technology to support learning. He also railed against AI hype, accurately of course. I was a fan.

I heard Roger speak at a Cog Sci conference I attended to present part of my dissertation research. The controversy around his presentation caused the guest speaker, Stephen Jay Gould, to comment “you guys are weird”! His reputation preceded him; I had one of his PhD graduates on a team and he told me Roger was deliberately tough on them, saying “if you can survive me, you can survive anyone”.

I subsequently met up with Roger at several EdTech events hither and yon. In each he was his fiery, uncompromising self. Yet, he was also right. He was a bit of a contradiction: opinionated and unabashed, but also generous and committed to meaningful change. He was also a prodigious intellect; if you were as smart as him, I guess you had a reason to be self-confident. I got to know him a bit personally at those events, and then when he engaged me to advise his company. He occasionally would reach out for advice, and always offered the same in return.

He could be irritating in his deliberate lack of social graces, but he was willing to learn, and had a good heart. In return, I learned a lot from him, and use some of his examples in my presentations. It was an honor to have known him, and the world will be a little duller, and probably a little dumber, without him. Rest in peace.

Coping with information

2 February 2023 by Clark Leave a Comment

I just finished reading Ross Dawson’s Thriving on Overload, and it’s a worthy read. The subtitle basically explains it: The 5 powers for success in a world of exponential information. The book balances principle and practice, with clear and cogent explanations. It’s not the only model for information management, but given the increasing challenge, it’s a worthwhile read if you’re looking for help coping with the information deluge.

I’d heard Ross speak at an event, courtesy of my late friend Jay Cross. Ross is renowned as a futurist, perceiving trends ahead of most folks. He’s an Aussie (Oz being my 2nd home ;); I can’t say I really know him, but he has a well-established reputation and keynotes around the world. He was perfectly coherent then, and is again here.

Dawson frames the elements in terms of how our brain works, which makes sense. He suggests: having an initial purpose, understanding the connections, filtering what’s coming in, paying attention to what’s important, and synthesizing what’s seen. Then, of course, comes integrating them into a collective whole. He tosses in many interesting and useful observations along the way.

I’ve been, and remain, a fan of Harold Jarche’s Personal Knowledge Management (PKM). His framework is fairly simple – seek, sense, share – though the nuances make it powerful. Harold receives a mention in the book, and I see some synergies. He takes the ‘purpose’ as implicit, and I see Dawson’s framing and synthesizing as both parts of Jarche’s ‘sense’. Similarly, I see Dawson’s attention and filtering as equivalent to Jarche’s ‘seek’. Where they differ most, to me, is that Jarche asks you to share out your learning, while Dawson’s approach is more personal.

Dawson’s steps are coherent, individually and collectively. As a fan of diagramming, I liked his focus on framing. He grounds much of his argument in the natural ways our brains work, of which I’m also a fan. I will quibble slightly with the end, where he says our brains are evolving to meet this new demand. If we use a hardware/software metaphor, I’d agree that our brains adapt, but that’s not unique to information overload. What isn’t happening is our brain’s architecture changing. I think his claim may be slightly misleading in that sense. A small quibble with a generally very good book.

Overall, I think the practices Dawson recommends are valuable and sound. In this era of increasing information, having practices that assist is critical. You can take Harold’s workshop, or read Ross’s book; both will give you useful skills. What you shouldn’t do is continue on without some systematic practices. If you’re looking for help coping with information, it’s available. Recommended.

 

Hyping the news

31 January 2023 by Clark Leave a Comment

I just saw another of those ‘n things you must…if you…’ headlines, and as usual it had the opposite of its intended effect. I guess I’m a contrarian, because such headlines are an immediate warning to me. This one happened to be in an area I know, and I haven’t done any of the supposedly necessary things. Yet I have done the thing they claimed requires those prerequisites, arguably well (do awards count?). It made me reflect on how we’re hyping the news. Some thoughts…

Yes, I know that such headlines are clickbait, and ‘n’ should be small. Yet when I tried to boil down Upside’s ‘deeper learning’ list for an infographic, it came to 14 items. Inconvenient for hype, I’m afraid, but that’s what I’d put in the white paper. Of course there’s more, but I’m trying to be comprehensive, not ‘attractive’. Similarly, when I created my EEA alignment, I had nine elements. Not because they were convenient for marketing, but because that’s what emerged from the work.

I similarly see lists of ‘the five things’, or the ‘8 things’ (somehow 8 seems to be a maximum, at least for marketing ;). What worries me about these lists is whether they’re comprehensive. Is that really all? Have you ensured that they’re necessary and sufficient? Did you even have a process? It took four of us working for months to come up with the eight elements of the Serious eLearning Manifesto. None of the above lists (Manifesto, EEA, deeper learning) are definitive, but they are the result of substantial work and thinking, not just pulled together for a marketing push.

There are good lists, don’t get me wrong: ones where people have worked to identify critical elements, or good choices based upon principled grounds. Typically, in that case, there are pointers to the basis for the claims: either there’s someone who’s known for work in the area, or they’re transparent about the process. However, there are also lists where it’s clear someone’s just pulled together some random bits. Look for inconsistency, mismatches of types, etc.

In the broader picture, it’s clear that fear, outrage, and sensationalism sell. I just want to demonstrate some resistance, and prefer a clear argument over a rant. (Here I’m trying to do the former, not the latter. ;) This goes along with my broader prescription: I do want policy wonks making decisions. I really don’t want simple wrong answers to complicated questions, no matter how appealing.

So, my short take: if you know the area, read with a critical eye. If you don’t, look for warning signs, and see what those who do know have to say about it. Caveat emptor. That’s my take on trying to stay immune to the hyping of the news.

Learners as learning evaluators

24 January 2023 by Clark 7 Comments

Many years ago, I led the learning design of an online course on speaking to the media. It was way ahead of the times in a business sense; people weren’t paying for online learning. Still, there were some clever design factors in it. I’ve lifted one to new purposes, but also have a thought about how it could be improved. So here are some thoughts on learners as learning evaluators.

The challenge results from two conflicting desires. For one, we want to support free-form answers from learners, for situations where there’s more than one way to respond: for example, a code solution or a proposed social response. The other is the desire for auto-marking, that is, independent asynchronous learning. While it’s ideal to have an instructor in the loop to provide feedback, the asynchronous part makes that hard to arrange. We could try to have an intelligent programmed response (cf. artificial intelligence), but those can be difficult to develop, and costly. Is there another solution?

One alternative, occasionally seen, is to have learners evaluate their own response. There are positive benefits to this, as it gets learners to become self-evaluators. One of the mechanisms to support this is to provide a model answer to compare to the learner’s own response. We did this in that long-ago project, where learners could speak their response to a question, then listen to both their own and a model response.

There are some constraints on doing this; learners have to be able to see (or hear) their response in conjunction with the model response. I’ve seen circumstances where learners respond to complex questions and get the answer, but don’t have a basis for comparison. That is, they don’t get to see their own response, and the response was complex enough that it can’t be completely remembered. One particular instance of this is multiple-response questions, where you pick out a subset of options.

I want to go further, however. I don’t assume that learners will be able to effectively compare their response to the model response, at least initially. As they gain expertise they should, but early on they may not have the requisite ability. You can annotate the model answer with the underlying thinking, but there’s another option.

I’m considering the value of an extra rubric that states what you should notice about the model answer and prompts you to check whether you have all the elements. I’m suggesting that this extra support, while it might add some cognitive load to the process, also reduces load by supporting attention to the important aspects. Also, this is scaffolding that can be gradually removed, allowing learners to internalize the thinking. A minimal sketch of the idea follows.
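To make the idea concrete, here’s a minimal sketch in Python. It’s purely hypothetical (not any real tool; all names and rubric items are mine, invented for illustration): the learner sees their response alongside the model response, then walks through rubric prompts about what to notice, and the scaffolding fades simply by passing a shorter (or empty) rubric as expertise grows.

```python
# A hypothetical sketch, not any real tool: rubric-scaffolded self-evaluation
# of a free-form response. All names and rubric items are illustrative.

from dataclasses import dataclass


@dataclass
class RubricItem:
    prompt: str            # what to notice about the model answer
    present: bool = False  # learner's own judgment, recorded during review


@dataclass
class SelfEvaluation:
    learner_response: str
    model_response: str
    rubric: list[RubricItem]  # shrink or empty this list to fade the scaffolding

    def review(self) -> None:
        """Show both responses side by side, then walk through the rubric."""
        print("Your response:\n", self.learner_response)
        print("\nModel response:\n", self.model_response)
        for item in self.rubric:
            reply = input(f"Did your response {item.prompt}? (y/n) ")
            item.present = reply.strip().lower().startswith("y")
        covered = sum(item.present for item in self.rubric)
        print(f"\nYou judged {covered} of {len(self.rubric)} elements covered.")


# Early learners get the full rubric; as expertise grows, the same review
# runs with fewer (or no) prompts, leaving the comparison to the learner.
evaluation = SelfEvaluation(
    learner_response="I'd acknowledge the question and move to our message.",
    model_response=("Acknowledge the question, bridge to the key message, "
                    "and close with a concrete example."),
    rubric=[
        RubricItem("acknowledge the question"),
        RubricItem("bridge to the key message"),
        RubricItem("close with a concrete example"),
    ],
)
evaluation.review()
```

The key design point, per the above, is that the rubric directs attention rather than grading: the learner still makes the comparison, which is what builds self-evaluation skill.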

I think we can have learners as learning evaluators, if we support the process appropriately. We shouldn’t assume that ability, at least initially, but we can support it. I’m not aware of research on this, though I certainly don’t doubt it. If you do know of some, please do point me to it! If you don’t, please conduct it! :D Seriously, I welcome your thoughts, comments, issues, etc.

Debating debates

17 January 2023 by Clark Leave a Comment

This is the year, at the LDA, of unpacking thinking (the broader view of my previous ‘exposure‘). The idea is to find ways to dig a bit into the underlying rationale for decisions, to show the issues and choices that underlie design decisions. How to do that? Last year we had the You Oughta Know series of interviews with folks who represent some important ideas. This year we’re trying something new, using debates to show tradeoffs. Is this a good idea? Here’s the case, debating debates.

First, showing underlying thinking is helpful. For one, you can look at Alan Schoenfeld’s work on showing his thinking, as portrayed in Collins & Brown’s Cognitive Apprenticeship. Similarly, the benefits are clear in John Sweller’s worked-examples research. While it’s fine to see the results, if you’re trying to internalize the thinking, having it made explicit is helpful.

Debates are a tried and tested approach to issues. They require folks to explore both sides. Even if there’s already a reconciliation, I feel it’s worth having the debate to unpack the thinking behind the positions. Then the resolution comes from an informed position.

Moreover, they can be fun! As I recalled here, in an earlier debate, we agreed to that end. Similarly, in some of the debates I had with Will Thalheimer (e.g. here), we deliberately were a bit over-the-top in our discussions. The intent is to continue to pursue the fun as well as exposing thinking. It is part of the brand, after all ;).

As always, we can end up being wrong. However, we believe it’s better to err on the side of principled steps. We’ll find out. So that’s the case for debating debates. What positions would you put up?

Don’t make me learn!

10 January 2023 by Clark 1 Comment

In a conversation with a client, the book Don’t Make Me Think was mentioned. Though I haven’t read it, I’m aware of its topic: usability. The underlying premise also is familiar: make interfaces that use pre-existing knowledge and satisficing solutions. (NB: I used to teach interface design, having studied under one of the gurus.) However, in the context of the conversation, it made me also ponder a related topic: “don’t make me learn”. Which, of course, prompted some reflection.

There are times, I’ll posit, when we don’t want employees to be learning; times when learning doesn’t make sense. For instance, if the performance opportunities are infrequent, it may not make sense to try to have the knowledge in people’s heads. If there’s a resource people can use to solve the problem, rather than learning, that’s probably a better answer. That is, in almost any instance, if the information can be in the world, perhaps it should be.

One reason for this is that learning, done properly, is hard. If a solution must be ‘in the head’ – available when needed and transferring to appropriate situations – there’ll likely be a fair bit of practice required. If it’s complex, much more so. Van Merriënboer’s Four Component Instructional Design is necessarily rigorous! Thus, we shouldn’t be training unless it absolutely, positively has to be in the head when needed (such as in life-threatening situations like aviation and medicine).

I’m gently pushing the idea that we should avoid learning as much as possible! Make the situation solvable in some other way. When people talk about ‘workflow learning’, they say that if it takes you out of the workflow, it’s not workflow. I’ll suggest that if it doesn’t, it’s not learning. Ok, so I’m being a bit provocative, but too often we err on the side of throwing training at it, even when it’s not the best solution. Let’s aim for the reverse, finding other solutions first. Turn to job aids or community (learning can be facilitated around either, as well), but stop developing learning as a default.

So, don’t make me learn, unless I have to. Fair enough?

Information to Miniscenarios

13 December 2022 by Clark 2 Comments

I’ve talked in the past about miniscenarios. By this, I mean rewriting multiple-choice questions (MCQs) to actually be situations requiring decisions, and choices thereto. I evangelize this regularly. I’ve also talked about what you need from subject matter experts (SMEs). What I haven’t really done is talk about how you map information to miniscenarios. So it’s time to remedy that.

So, first, let’s talk about the structure of a miniscenario. I’ve suggested that it’s an initial context or story, in which a situation precipitates the need for a decision. There’s the right choice, and then alternatives: not random or silly ones, but ones that represent ways in which learners reliably go wrong. There’s also feedback, which is best delivered as story-based consequences first, then actual conceptual feedback.

So what’s the mapping? One of the things we (should) get from SMEs is the contexts in which these decisions come into play. Thus, the setting for the miniscenario is one of these contexts. It may be made fantastic in the story, but the necessary contextual elements have to exist. (“Pat had been recently promoted to line supervisor…”)

Then, we have the decisions the learners need to be able to make. These often come in the form of performance objectives. This forms the basis for choosing a situation that precipitates the decision, and the decision itself. (“The errors in manufacturing were higher than the production agreement stipulated. Pat:”) Also, at least, the correct answer. (* worked backward through the process.)

The wrong answers come from other information we need from SMEs: misconceptions. These are the ways that individuals go wrong when performing. I’ve advocated before that you may want different types of SMEs; it may be that supervisors of the performers have more insight here than content experts. Regardless, you want to make these alternatives available as possible responses. You’ll want to address the difficulty of discriminating between alternatives as a way to manipulate the challenge of the task; it should be appropriate to the learners’ level. (* asked team members what they thought the problem was; * exhorted the team to pay more attention to quality)

The feedback starts with the consequences, which you should also get from SMEs. What happens when you get it right? What happens with each wrong answer? These may come from stories about wins and losses that you also want to collect. (“Pat’s team did not like the implicit claim that they weren’t working hard enough.”)

Finally, there are the models that are the basis for good performance, and consequently also the basis for the feedback. These you should also collect, because you use them to explain why a choice is good or bad. You don’t want to just say right or wrong; learners need to understand the underlying reason to reinforce their understanding. (Which may also mean they need to see their answer alongside the feedback, so they remember what they chose.) Importantly, they need specific feedback for each wrong answer, so your implementation tool needs to support that! (When investigating errors, don’t start with the team. We always look at the process first, as system flaws need to be eliminated first.)
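Pulling those pieces together, here’s a minimal sketch, in Python, of the shape this mapping might take. It’s purely illustrative (no particular authoring tool works this way), reusing the running ‘Pat’ example; the invented consequence and feedback strings for each option are my own paraphrases of the above, not SME-sourced content. Note that each alternative carries its own story consequence and conceptual feedback, as argued above.

```python
# Hypothetical sketch of a miniscenario's structure; illustrative only,
# not any authoring tool's actual format.

from dataclasses import dataclass


@dataclass
class Option:
    action: str       # what the protagonist does (correct answer or misconception)
    correct: bool
    consequence: str  # story-based consequence, shown first
    feedback: str     # conceptual feedback, grounded in the underlying model


@dataclass
class MiniScenario:
    context: str      # setting, drawn from SME-provided performance contexts
    situation: str    # the trigger that precipitates the decision
    options: list[Option]


pat = MiniScenario(
    context="Pat had been recently promoted to line supervisor.",
    situation=("The errors in manufacturing were higher than the production "
               "agreement stipulated. Pat:"),
    options=[
        Option(
            action="worked backward through the process.",
            correct=True,
            consequence="The team traced the errors to a flaw in the process.",
            feedback=("When investigating errors, don't start with the team; "
                      "look at the process first, as system flaws need to be "
                      "eliminated first."),
        ),
        Option(
            action="exhorted the team to pay more attention to quality.",
            correct=False,
            consequence=("Pat's team did not like the implicit claim that "
                         "they weren't working hard enough."),
            feedback=("Starting with the team presumes a people problem; "
                      "examine the process before questioning effort."),
        ),
    ],
)
```

The structure makes the SME dependencies explicit: context, decision, misconception-based distractors, and per-option consequences and feedback each trace back to something you have to elicit.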

Pretty much everything you need from SMEs plays a role in providing practice. Miniscenarios aren’t necessarily the best form of practice, but they’re typically available in your authoring environment. Writing them isn’t necessarily as easy as generating typical recognition questions, but they more closely mimic the actual task, and therefore lead to better transfer. Plus, you’ll get better as you practice. So know the mapping of information to miniscenarios, practice your miniscenario writing, and put it into play!

Exposing myself

29 November 2022 by Clark Leave a Comment

This blog is where I “learn out loud”. I try to show my thinking. Yet, I realize I haven’t been doing that as much as I should, or perhaps not as effectively and mindfully as I should. So here’re some reflections on sharing my thinking, or ‘exposing myself’.

Worked examples, as we know from John Sweller, are most effective when they come before practice. Similarly, in Cognitive Apprenticeship, instructors model the desired performance. And I do try to share my thinking. However, I may not have been doing it properly. It needs to unpack the expertise; it can’t be just the knowledge, but the thinking behind it. That’s the expertise that experts themselves don’t have conscious access to!

For instance, in my workshops, I often show the outcome of what I (with a team) did, after I have learners do it. Other times, like in the recent L&D conference, I shared my deeper thinking on analyzing technology through affordance, but then didn’t challenge learners to apply that themselves (to be fair, I only had an hour; on the other hand, I didn’t properly prepare).

Increasingly, I think one of the ways to scale my impact is to share my thinking, in context. When people are practitioners, that may be enough, but at times it may also help to provide ‘challenges’ and feedback as well. That, however, doesn’t scale as well. I can only review so many projects…

I guess my biggest takeaway is to be more conscious about making the underlying thinking clear. I think it’s helpful to learn out loud, but also to ensure that I’m showing the thinking. So, I’ll keep exposing myself…or, at least, my thinking. I’ll try to do it more explicitly and clearly, as well.

Conference Outcomes?

24 November 2022 by Clark Leave a Comment

Two months ago, I wrote about the L&D Conference we were designing. In all fairness, I reckon I should report on how it went, now that it’s finished. There are some definite learnings, which we hope to bring forward, both for the conference (should we run it again, which we intend), and for the Learning & Development Accelerator (LDA; the sponsoring org, of which I’m co-director with Matt Richter) activities as well. So here are some thoughts on the conference outcomes.

Our design was to have two tracks (basic and advanced) and a limited but world-class faculty to cover the topics. We also were looking not just to replicate what you get at typical face-to-face conferences (which we like as well), but to do something unique to the medium and our audience. Thus, we weren’t just doing one-off sessions on a topic. Instead, each was an extended experience, with several sessions spread out over days or weeks.

That design seemed to work well. While not everybody who attended one session on a topic attended them all, there was good continuity. And the feedback has been quite good; folks appreciated the deep dive with a knowledgeable and articulate expert. This, we figure, is an important result, and one we’re proud of. If someone misses a session, they can always review the video (we’re keeping the contents available for the rest of the year).

Our social events, networking and trivia, didn’t do quite so well. The networking night did have a small attendance, but the trivia night didn’t reach critical mass. We attribute this at least partly to its being a later thought, not promoted from the get-go.

We struggled a bit with scheduling. First, we spread the conference across dates when different countries switch to/from daylight saving time. The platform we used didn’t manage that elegantly, and we owe a lot to a staffer who wrestled it into submission. Still, it led to some problems with folks connecting at the right time. On the other hand, having the courses spread out meant sessions didn’t collide; you could attend any sessions you wanted (the tracks were indicative, not prescriptive).

The platform also had one place to schedule events, but it was a web page. As a faculty member opined, they wished they could’ve loaded all the sessions into their calendar with one click. I resonate with that, because in moments when I might’ve had spare bandwidth to attend a session, I’m more likely to look at my calendar than at the event page. Not sure there’s an easy solution, of course. Still, folks were able to find and attend sessions.

We also didn’t get the social interaction between sessions we’d hoped for, though there was great interaction during the sessions. Faculty and participants were consistent in that perspective. There was a lot of valuable sharing of experiences, questions, and advice.

One thing that, post hoc, I realize is that it really helps to unpack the thinking. The faculty we chose are those who’ve demonstrated an ability to help folks see the underlying thinking. That paid off well! However, we realize that there may be more opportunities. An interesting discussion arose in a closing event about the value of debates, where two folks who generally agree on the science find something to diverge on. Everyone (including the debaters) benefits from that.

We’re going to be looking to figure out how to do more unpacking, and to share the ability to do the necessary critical thinking around claims in our industry. The LDA focuses on evidence-based approaches to L&D. That requires a bit more effort than just accepting the status quo (and associated myths, snake oil, etc.), but it’s worth it for our professional reputation.

So those are my reflections on the L&D Conference outcomes. Any thoughts on this, from attendees or others?

Learning Science Bandwagon?

8 November 2022 by Clark Leave a Comment

I’m not alone in carrying the banner for learning science. Others are talking about evidence-based practices, making it stick, and more. This, I maintain, is a good thing. But is there too much of a good thing? Is there a problem with a learning science bandwagon?

First, let’s be clear: there are some initiatives that strike me as redundant, or worse. For one, folks have been talking about neuroscience. And I think neuroscience research is quite interesting! What I also believe, buttressed by others, is that there haven’t been any results from neuroscience that are essential for learning design. All the implications have already been documented in learning science research at the cognitive or social level. Neuroscience is cool, but its use in learning design tends to be to draw attention (read: marketing), not to produce any new outcomes.

I feel similarly about the term brain-based. Yes, learning is brain-based. Isn’t it a wee bit redundant to say so? I suppose they’re implying that they’re aligned with how the brain works. Which is a good thing. Still, despite the alliteration, it seems a bit more like hype than being informative. As the saying goes, your mileage may vary.

However, I’m seeing more and more people now talking about learning science. That’s a good thing. Are they jumping on a bandwagon? Maybe, but there’s lots of room. As long as folks are actually digging into what the learning science says, and not just paying lip service, they’re welcome. To be clear, I don’t own the wagon anyway; I’m a practitioner, not a core researcher. So, I really do think it’s great if more and more people start talking, and walking, learning science.

I’ll go further, of course. I think we should be paying attention to what cognitive and learning sciences say about how we think, work, and learn. That is, the applicability of understanding how our minds work goes beyond learning to our overall organizational practices. But, hey, we gotta start somewhere, right? I think it’s good if we’re moving in the right direction. I can quibble that it’s slower than I’d like, but progress is progress.

So, yay, more learning science! C’mon, jump on the learning science bandwagon; we’ve got space, and it appears we’re moving forward. All good. Hope you’ll join us!
