Learnlets

Clark Quinn’s Learnings about Learning

Ownership versus ubiquity

13 September 2011 by Clark Leave a Comment

The notion that soon everything will be in the cloud, and we’ll just use an interface surface near us is not new.  The notion is that the technology will recognize you and present your environment, ready for you to accomplish your goals.  This is a nice idea, and I can see it working, but it’s not trivial.

Contrast this with what Judy Brown identifies as an important component of mobile learning. For her, mobile devices have to be something you’re familiar with and have with you all the time. And that, to me, is the sticking point.

With an interface surface you come upon, would you necessarily recognize the different ways the interface would manifest? You don’t want a big touchscreen (despite Minority Report imaginings) for very complex work, because the research shows your arms fatigue too quickly. So some devices might have a keyboard, and the variety could be high. And yes, it’s your interface, but with all the different possible form factors, could you make it comprehensible? And you’re still at the mercy of surface availability (much like waiting in line for the email computers at conferences).

Now, I can see having a mobile device and  then using an accessible interface that recognizes you by the device proximity, so you’re not stuck. And I can imagine that it would be possible to make a scalable interface (just not necessarily easy).  I do wonder, however, about some surfaces being so designed for aesthetics that the usability is compromised (c.f. The Design of Everyday Things).

And, particularly for my notion of slow learning (which I need to augment with ubiquity and personalization – quick, I need a new phrase! :), the ability for a device to be with you may be required to do the teachable moment thing. That is, having a context-sensitive device right there at the appropriate place and time may be needed to really develop us in the ways we deserve.

So I don’t take that vision of ubiquitous computing surfaces at face value; I think there are some reasons why mobile devices may still make sense. Which isn’t to say there’s not a way, but I’m still holding out for something with me.

Layering learning

8 September 2011 by Clark 3 Comments

Electronic Performance Support Systems are a fabulous concept, as pioneered by Gloria Gery back in the early ’90s. The notion is that as you use a system and have entries or decisions to make, there are tools available that can provide guidance: proactively, intelligently, and context-appropriately. Now, as I heard the complaint at the time, this would really just be good interface design, but the fact is that many times you have to retrofit assistance on top of a bad design for sad but understandable reasons.

The original examples were built around desktop tasks, but the concept can easily be decoupled from the desktop via mobile devices. One of my favorite examples is the GPS system: the device knows where you are and where you want to go (because you told it), and it gives you step-by-step guidance, even recalculating if you make a change. Everything from simple checklists to full adaptive help is possible, and I’ve led the design of such systems.
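That guidance loop is easy to sketch. Below is a minimal, purely hypothetical Python toy (the route table and function names are mine, not from any real system) showing the GPS pattern: serve the next step, and recalculate when the performer goes off plan:

```python
def plan(start: str, goal: str) -> list[str]:
    # Toy route table standing in for real navigation data.
    routes = {("home", "office"): ["main st", "5th ave", "office"]}
    return routes.get((start, goal), [goal])

def next_step(current: str, goal: str, route: list[str]) -> str:
    # The device knows where you are and where you want to go,
    # and serves up just the next piece of guidance.
    if current in route:
        i = route.index(current)
        return route[i + 1] if i + 1 < len(route) else "arrived"
    # Off the planned route: recalculate from wherever you are now.
    return plan(current, goal)[0]

route = plan("home", "office")
print(next_step("main st", "office", route))   # → 5th ave
```

The same shape covers a checklist (steps in order) or adaptive help (recalculation on deviation); only the planning function gets smarter.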

One of the ideas implicit in Gery’s vision, however, that I really don’t see, is the possibility of having the system not only assist you in performing, but also help you learn. She raised the idea in her book on the subject, but without elaborating how it would happen; her examples didn’t really show it, and I haven’t seen it in practice in the years since. Yet the possibility is there.

I reckon it wouldn’t really take much. There is (or should be) a model guiding the decisions about what makes the right step, but that’s often hidden (in our learning as well). Making that model visible, and showing how it drives the support and recommendations being made, could be offered as a ‘veneer’ over the system. It wouldn’t have to be visible; it could just be available at a click, or as a preference for those who might want it.
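As a sketch of that ‘veneer’, here’s a hypothetical Python fragment (all names invented for illustration) where the support system carries its rationale alongside each recommendation, hidden by default but available at a click:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str       # the performance support everyone sees
    rationale: str    # the model behind it, surfaced only on request

def recommend_route(traffic_delay_min: int) -> Recommendation:
    # The guidance rule and the reasoning live side by side, so the
    # 'why' can be shown as a learning veneer over the support.
    if traffic_delay_min > 15:
        return Recommendation(
            action="Take the alternate route",
            rationale="Delays over 15 minutes usually outlast a detour, "
                      "so rerouting minimizes expected travel time.")
    return Recommendation(
        action="Stay on the current route",
        rationale="Short delays rarely justify the overhead of rerouting.")

rec = recommend_route(traffic_delay_min=25)
print(rec.action)      # everyone gets the guidance
print(rec.rationale)   # learners who opt in get the model, too
```

The design point is simply that the rationale is authored with the rule, rather than bolted on later; whether the interface exposes it is then just a preference.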

Part of my vision of how to act in the world is to ‘learn out loud’. Well, I think our tools and products could be more explicit about the thinking that went into them, as well. Many years ago, in HyperCard, you could just use buttons and fields, but you could also open them up and get deeper into them, going from fixed links to coded responses. I have thought that a program or operating system could work similarly, having an initial appearance but being capable of being explored and customized. We do this in the real world, choosing how much about something we want to learn (and I still want everyone who uses a word processor to learn about styles!). Some things we pay someone else to do; other things we want to do ourselves. We learn about some parts of a program and don’t know about others (it used to be joked that no one knows everything about Unix; I feel the same way about Microsoft Word).

We don’t do enough performance support as it is, but hopefully as we look into it, we’ll consider the benefits of pairing the performance support with some of the underlying thinking, generating more comprehension and the benefits that brings. It’s good to reflect on learning, and seeing how thinking shapes performance both develops us and improves our performance.

Digital Helplessness(?)

5 August 2011 by Clark 3 Comments

Recently, I’ve been hearing quite a bit of concern that reliance on digital, and increasingly mobile, technology may make us stupider. And I don’t think this is easy to dismiss. In a sense, it could be a case of learned helplessness: after coming to depend on the tools, folks might not have the information they need when the tools aren’t at hand.

Recently announced research shows that folks change what they remember when enabled with search engines: they don’t remember the data, but instead how to find it. Which could be a problem if they need to know the data in some context where they’re not digitally enabled.

Another concern conveyed to me is that folks might not engage in learning about their environs (e.g. when traveling), and in other ways miss out on opportunities to learn when dependent on digital devices. Certainly, as I’ve mentioned before, I’ve been concerned about how disabled I feel when dissociated from my digital support (my external brain). Yet is it really a concern?

My take is that it might be a concern if people are doing it unconsciously.  I think you could miss out (as m’lady points out when I am reading instead of staring out the window every moment as we take the train through another country :) on some opportunities to learn.

On the other hand, if you are consciously choosing what you want to remember and what you want to leave to the device, then you’re making a choice about how you allocate your resources (a ‘good thing’). We do this in many ways in our lives already, for instance how much we choose to learn about cooking, and, more directly related, how much to learn about formatting in a word processor.

Yes, I’ve been frustrated without my support when traveling, but that’s chosen (which does not undermine my dismay at the lack of ability to access digital data overseas).  I guess I’m arguing for chosen helplessness :).  So, what are you choosing to learn and what to devolve to resources?

Think like a publisher

2 May 2011 by Clark Leave a Comment

Way back when we were building the adaptive learning system dubbed Intellectricity™, we were counting on a detailed content model that carved the overall content into discrete elements that could be served up separately to create a unique learning experience. As I detailed in an article, issues included granularity and tagging vocabulary. While my principle for the right level of granularity is playing a distinct role in the learning experience (e.g. separating a concept presentation from an example from a practice element), my simpler heuristic is to consider “what would a knowledgeable mentor give to one learner versus another?” The goal, of course, is to support the future ability to personalize and customize the learning experience.

Performance Ecosystem

Back then, we were thinking of it as a content delivery engine, but our constraints required content produced in a particular format, and we were thinking about how we’d get content produced the way we needed. Today, I still think that content produced in discrete chunks, under a tight model, is a valuable investment of time and energy. Increasingly, I’m seeing publishers taking a similar view, and as new content formats get developed and delivered (e.g. ebooks, mobile web), more careful attention to content makes sense.

The benefits of more careful articulation of content can go further. In the performance ecosystem model (PDF), the greater integration step is specifically about more tightly integrating systems and processes. While this includes coupling the disparate systems into a coherent workbench for individuals, it also includes developing content into a model that accounts for different input sources, output needs, and governance. While this is largely for formal content, it could cover community-generated content as well. The important thing is to stop redundant content development. Typically, marketing generates requirements and engineering develops specifications, which are then fed separately to documentation, sales training, customer training, and support, each of which generates content anew from the original materials. Developing into and out of a content model reduces errors and redundancy, and increases flexibility and control. (And this is not incommensurate with devolving responsibility to individuals.)
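To make the single-source idea concrete, here’s a toy Python sketch (the chunk fields and output channels are invented for illustration) of one tagged content model feeding two different outputs, instead of each team rewriting the material:

```python
# A hypothetical single-source content chunk: one tagged model,
# multiple derived outputs.
chunk = {
    "id": "widget-reset",
    "type": "procedure",          # granularity tag: concept / example / practice...
    "audience": ["support", "customer"],
    "steps": ["Hold the reset button for 5 seconds",
              "Wait for the LED to blink twice"],
}

def render(chunk: dict, output: str) -> str:
    # Each channel (documentation, support script, ...) derives
    # its presentation from the same underlying model.
    if output == "docs":
        return "\n".join(f"{i + 1}. {s}" for i, s in enumerate(chunk["steps"]))
    if output == "support-script":
        return "Ask the customer to: " + "; then ".join(chunk["steps"]).lower()
    raise ValueError(f"unknown output: {output}")

print(render(chunk, "docs"))
print(render(chunk, "support-script"))
```

A change to a step in the model propagates to every channel, which is the error-reduction argument in miniature; the semantic tags are what later make rule- or analytics-driven delivery possible.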

We’re already seeing the ability to create custom recommendations (e.g. Amazon, Netflix), and companies are already creating custom portals (e.g. IBM). The ability to customize content delivery will be important for customer service, performance support, and slow learning. Whether driven by rules or analytics (or hybrids), semantic tagging is going to be necessary, and that’s a concomitant requirement of content models. But the upside potential is huge, and will eventually be a differentiator.

Learning functions in organizations need to move up the strategic ladder, taking responsibility for more than just formal learning: performance support and ecommunity as well. Thinking like advanced publishers can and should mean moving beyond the text, and even beyond content, to the experience. While that could mean custom designs (and in some cases it must, e.g. simulation games), for content curators and providers it also has to be about flexible business models and quality development. I believe it’s a must for other organizations as well. I encourage you to start thinking strategically about content development in richer ways: stop the one-off development, and start putting some up-front effort into not only templates, but also models with tight definitions and labels.

New Horizon Report: Alan Levine – Mindmap

20 April 2011 by Clark 3 Comments

This evening I had the delight to hear Alan Levine present the New Media Consortium’s New Horizon Report for 2011 to the ASTD Mt. Diablo chapter.  As often happens, I mindmapped it.  Their process is interesting, using a Delphi approach to converge on the top topics.

For the near term (< 1 year), he identified the two major technologies as ebooks and mobile devices (with a shoutout for my book: very kind).  For the medium term (2-3 years), he pointed to augmented reality and game-based learning (though only barely touching on deeply immersive simulations, which surprised me).  For the longer term (4-5 years), the two concepts were gesture-based computing and learning analytics.

A very engaging presentation.

mind map of Levine talk

Learning Experience Design thru the Macroscope

7 April 2011 by Clark 11 Comments

Our learning experience design is focused, essentially, on achieving one particular learning objective.  At the level of curricular design, we are then looking at sequences of learning objectives that lead to aggregate competencies.  And these are delivered as punctate events.  But with mobile technologies, we have the capability to truly start to deliver what I call ‘slow learning’: delivering small bits of learning over time to really develop an individual.  It’s a more natural map to how we learn; the event model is pretty broken.  Most of our learning comes from outside the learning experience.  But can we do better?

Really, I don’t think we have a handle on designing and delivering a learning experience that is spaced over time, and layered over our real world activities, to develop individuals in micro bits over a macro period of time rather than macro bits over a micro bit of time (which really doesn’t work). We have pieces of the puzzle (smaller chunks, content models) and we have the tools (individualized delivery, semantics), but putting them together really hasn’t been done yet.

Conceptually, it’s not hard, I reckon. You have more, smaller chunks of content, and a more distributed performance model. You couple that with more self-evaluation, and you design a system that is patiently persistent in assisting and supporting people along the way. You’d have to change your content design, and provide mechanisms to recognize external content and real performance contexts as learning experiences. You’d want to support many forms of equivalency, allowing self-evaluation against a rubric to co-exist with mentor evaluation.
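One piece of that patient persistence could be a simple spacing rule. Here’s a minimal sketch (a Leitner-style heuristic of my own choosing, not a claim about any particular system): a successful self-evaluation widens the interval before the next small bit, and a miss narrows it back down:

```python
def next_review(interval_days: int, succeeded: bool) -> int:
    # Leitner-style spacing: successful recall doubles the interval,
    # a miss resets it, so micro bits get redelivered over a macro period.
    return interval_days * 2 if succeeded else 1

interval = 1
history = [True, True, False, True]   # self-evaluations against a rubric
for outcome in history:
    interval = next_review(interval, outcome)
print(interval)   # → 2
```

A real slow-learning system would layer context recognition and mentor evaluation on top, but the scheduling core really can be this small.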

There are some consequences, of course. You’d have to trust the learner, they’d have to understand the value proposition, and it’s a changed model that all parties would have to accommodate. On the other hand, putting trust and value into a learning arrangement somehow feels important (and refreshingly different :). The upside potential is quite big, however: learning that sticks, learners who feel invested in, and better organizational outcomes. It’s really trying to build a system that is more mentor-like than instructor-like. It’s certainly a worthwhile investigation, and potentially a big opportunity.

The point is to recognize that technology is no longer the limit; our imaginations are. Then you can start thinking about what we would really want from a learning experience, and figure out how to deliver it. We still have to figure out what our design process would look like, what representations we’d need to consider, and the associated technology models, but this is doable. The possibility is now well and truly on the table; anyone want to play? I’m ready to talk when you are.

Clarity needed around Web 3.0

25 February 2011 by Clark 6 Comments

I like ASTD; they offer a valuable service to the industry in education, including reports, webinars, and very good conferences (despite occasional hiccups, *cough* learning styles *cough*) that I happily speak at and have even served on a program committee for. They may not be progressive enough for me, but I’m not their target market. When they come out with books like The New Social Learning, they are to be especially lauded. And when they make a conceptual mistake, I feel it’s fair, nay a responsibility, to call them on it. Not to bag them, but to try to achieve a shared understanding and move the industry forward. And I think they’ve made a mistake that is problematic to ignore.

A recent report of theirs, Better, Smarter, Faster: How Web 3.0 will Transform Learning in High-Performing Organizations, makes a mistake in its extension of a definition of Web 3.0, and I think it’s important to be clear. Now, I haven’t read the whole report, but they make a point of including their definition in the free Executive Summary (which I *think* you can get even if you’re not a member, but I can’t be sure). Their definition:

Web 3.0 represents a range of Internet-based services and technologies that include components such as natural language search, forms of artificial intelligence and machine learning, software agents that make recommendations to users, and the application of context to content.

This I almost completely agree with.   The easy examples are Netflix and Amazon recommendations: they don’t know you personally, but they have your purchases or rentals, and they can compare that to a whole bunch of other anonymous folks and create recommendations that can get spookily good.   It’s done by massive analytics, there’s no homunculus hiding behind the screen cobbling these recommendations together, it’s all done by rules and statistics.
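As an illustration of how far plain rules and statistics can go, here’s a toy Python sketch (invented data, simple item co-occurrence counting rather than any vendor’s actual algorithm) of the recommendation idea:

```python
from collections import Counter
from itertools import combinations

# Purchase histories of anonymous users (toy data, purely illustrative).
baskets = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_b", "book_c"},
]

# Count how often pairs of items co-occur -- rules and statistics,
# no homunculus behind the screen.
co = Counter()
for basket in baskets:
    for x, y in combinations(sorted(basket), 2):
        co[(x, y)] += 1
        co[(y, x)] += 1

def recommend(owned: set) -> str:
    # Score unowned items by how often they co-occur with owned ones.
    scores = Counter()
    for item in owned:
        for (a, b), n in co.items():
            if a == item and b not in owned:
                scores[b] += n
    return scores.most_common(1)[0][0]

print(recommend({"book_a"}))   # → book_b
```

Real systems scale this up with massive analytics and much better statistics, but the spookily-good effect comes from exactly this kind of comparison against a whole bunch of other anonymous folks.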

I’ve presented before my interpretation of Web 3.0, and it is very much about using smart internet services to do, essentially, system-generated content (as opposed to 1.0 producer-generated content and 2.0 user-generated content). The application of context to content could be a bit ambiguous, however; I’d mean the dynamic application of context to content, rather than pre-designed solutions (which get back to web 1.0).

As such, the first of their three components includes the semantic web. If they’d stopped there, that would be fine. However, they bring in two other components. The second:

  • the Mobile Web, which will allow users to experience the web seamlessly as they move from one device to another, and most interaction will take place on mobile devices.

I don’t see how this follows from the definition. The mobile web is really not a fundamental shift. Mobile may be a fundamental societal shift, but just being able to access the internet from anywhere isn’t a paradigmatic shift from webs 1.0 and 2.0. Yes, you can access produced content and user-generated content wherever/whenever, but that’s not going to change the content you see in any meaningful way.

They go on to the third component:

  • The third element is the idea of an immersive Internet, in which virtual worlds, augmented reality, and 3-D environments are the norm.

Again, I don’t see how this follows from their definition. Virtual worlds start out as producer-generated content, web 1.0: sims and games are designed and built a priori. Yes, it’s way cool, technically sophisticated, etc., but it’s not a meaningful change. And yes, worlds like Second Life let you extend them, turning it into web 2.0, but it’s still not fundamentally new. We took simulations and games out of the advanced technology track for the conferences several years ago when I served.

Yes, you can do new stuff on top of the mobile web and immersive environments that would qualify, like taking your location and, say, goals and programmatically generating specific content for you, or creating a custom world and outcomes based upon your actions in the world from a model not just of the world, but of you, and others, and… whatever. But without that, it’s just web 1.0 or 2.0.

And it’d be easy to slough this off and say it doesn’t matter, but ASTD is a voice with a long reach, and we really do need to hold them to a high standard because of their influence. And we need people to be clear about what’s clever and what’s transformative. This is not to say my definition is the only one; others have interpretations that differ. But I think the convergent view is that, while Web 3.0 may be more than the semantic web, it’s not just evolutionary steps. I’m willing to be wrong, so if you disagree, let me know. But I think we have to get this right.

Jane Hart’s Social Learning Handbook

24 February 2011 by Clark 1 Comment

Having previously reviewed Marcia Conner and Tony Bingham’s The New Social Learning, and Jane Bozarth’s Social Media for Trainers, I have now received my copy of Jane Hart’s Social Learning Handbook. First, I’ll review Jane’s book on its own, and then put it in the context of the other two. Caveat: I’m mentioned in all three, for sins in my past, so take the suitable precautions.

Jane’s book is very much about making the case for social learning in the workplace, as the first section details.   This is largely as an adjunct to formal learning, rather than focusing on social media for formal learning. Peppered with charts, diagrams, bullet lists, and case studies, this book is really helpful in making sense of the different ways to look at learning.

The first half of the book is aimed at helping folks get their minds around social media, with the arguments, examples, and implementation hints. While her overarching model does include formal structured learning (FSL), it also covers the other components that complement FSL: accidental and serendipitous learning (ASL), personally directed learning (PDL), group-directed learning (GDL), and intra-organizational learning (IOL). The point, as she shares via Harold Jarche’s viewpoint, is that we need to support not just dependent learning, but independent and interdependent learning. And she’s focused on helping you succeed, with lots of practical advice about problems you might face and steps that might help.

Jane has a unique and valuable talent for looking at things and sorting them out in sensible ways, and that is put to great use here. Nearly the last half of the book covers 30 ways to use social media to work and learn smarter, where she goes through tools, hints and tips on getting started, and more. Here, her elearning tool of the day site has yielded rich benefits for the reader, because she’s up to date on what’s out there, and has lists of sites, tools, and people, with helpful comments.

This is the book for the learning and development group that wants to figure out how to really support the full spectrum of performers, not just the novices, and/or who want to quit subjecting everyone to a course when other tools may make sense.

So, how does this book fit with Jane Bozarth’s Social Media for Trainers, and Conner & Bingham’s The New Social Learning? Jane B’s book is largely for trainers adding social media to supplement formal learning, whereas Jane H’s book is for those looking to go beyond formal learning, so they’re complementary. Marcia and Tony’s book is really more the higher-level picture and as such is more useful to the manager and executive. Roughly, I’d sell the benefits to the organization with Marcia & Tony’s book, I’d give Jane B’s book to the trainers and instructional designers who are charged with improving on formal learning, and I’d give Jane H’s book to the L&D group overall who are looking to deliver more value to the organization.

They’re all short, paperback, quick and easy reading, and frankly, I reckon you oughta pick all three of them up so you don’t miss a thing.   You’d be hard pressed to get a better introduction and roadmap than from this trio of books.   Let’s tap into this huge opportunity to make things go better and faster.

Quip: limits

21 February 2011 by Clark Leave a Comment

The limits are no longer the technology; the limits are between our ears (ok, and our pocketbooks).

My old surfing buddy Carl Kuck used to say that the only limits are between our ears, and I’ve purloined his phrase for my nefarious purposes. This comes from Arthur C. Clarke’s observation that “any sufficiently advanced technology is indistinguishable from magic“. I want to suggest that we now have magic: we can summon up demons (ok, agents) to do our bidding, and peer across distances with crystal balls (or web cams). We really can bring anything, anywhere, anytime. If we can imagine it, we can make it happen, provided we can marshal the vision and the resources. The question is, what do we want to do with it?

Really, what we do in most schooling is contrary to what leads to real learning. I believe that technology has given us a chance to go back to real learning and ask “what should we be doing?”.   We look at apprenticeship, and meaningful activity, and scaffolding, and realize that we need to find ways to achieve this.   (Then we look at most schooling and recoil in horror.)

So, let’s stop letting our cognitive architecture (set effects, functional fixedness, premature evaluation) limit us; let’s think broadly about what we could be doing, and then figure out how to make it so. I’ll suggest that some components are slow learning, distributed cognition, social interaction, and meta-learning (aka 21st Century skills). What do you think might be in the picture?

Learning Technologies UK wrap-up

31 January 2011 by Clark 4 Comments

I had the pleasure of speaking at the Learning Technologies ’11 conference, talking on the topic of games.   I’ve already covered Roger Schank‘s keynote, but I want to pick up on a couple of other things. Overall, however, the conference was a success: good thinking (more below), good people, and well organized.

The conference was held on the 3rd floor of the conference hall, while the ground and 1st floors hosted the exposition: the ground floor held the learning and skills (think: training) exhibits, while the 1st floor held the learning technology (read: elearning) vendors. The conference was likewise split between learning technologies (day 1) and learning and skills (day 2), so I have to admit I was surprised (not unpleasantly) that the receptions weren’t held on the respective exhibit floors, to support the vendors, tho’ having the chance to chat easily with colleagues in a more compact setting was also nice.

I’m not the only one who commented on the difference between the floors: Steve Wheeler wrote a whole post about it, noting the future showing above and the past below. At a post-conference review session, everyone commented on how the level of discussion was more advanced than expected (and it gave me some ideas of what I’d love to cover if I get the chance again). I’d heard that Donald Taylor runs a nice conference, and was pleased to see that it more than lived up to the billing. There was also a very interesting crowd of people I was glad to meet or see again.

In addition to Roger’s great talk on what makes learning work, there were other stellar sessions. The afore-mentioned Steve gave an advanced presentation on the future of technologies that kept me engaged despite a severe bout of jetlag, talking about things you’ve also heard here: semantics, social, and more. He has a web x.0 model that I want to hear more about, because I wasn’t sure I bought the premise, but I like his thinking very much. There was also a nice session on mobile, with some principles presented and then an interesting case study using iPads under somewhat severe (military) security constraints.

It was hard to see everything I wanted to, with four tracks. To see Steve, I had to pass up Cathy Moore, whose work I’ve admired, though it was a pleasure to meet her for sure. I got to see Jane Bozarth, but at the expense of missing my colleague Charles Jennings. I got to support our associate Paul Simbeck-Hampson, but at the cost of missing David Mallon’s talk on learning culture, and so on.

Still, having too many great talks to choose from is better than the alternative. A great experience overall, and I can happily recommend the conference.
