Learnlets

Clark Quinn’s Learnings about Learning

Tradeoffs in aesthetics

28 March 2023 by Clark Leave a Comment

For the LDA debate this month, Ruth Clark talked with Matt Richter and me about aesthetics in learning. Ruth, you should know, is the co-author of eLearning and the Science of Instruction, amongst other books; it’s a must-have that leverages Richard Mayer’s work on multimedia learning. Thus, she’s knowledgeable about what the research says. What emerged in the conversation was a problem about tradeoffs in aesthetics that’s worth exploring.

So, for one thing, we know that gratuitous media interferes with learning. From John Sweller’s work on cognitive load theory, we know that processing unnecessary data reduces the cognitive resources available to support learning. There’s usually enough load just with the learning materials. Unless a media element materially supports learning, it should be avoided.

On the other hand, we also know that we should contextualize learning. The late John Bransford’s work with the Cognition and Technology Group at Vanderbilt, for instance, demonstrated this. As the late David Jonassen also demonstrated with his problem-based learning, we retain and transfer better with concrete problems. Thus, creating a concrete setting for applying the knowledge benefits learning.

What this sets up, of course, is a tradeoff. That is, we want to use aesthetics to help communicate the context, but we want to keep them minimal. How do we do this? Even text (which is a medium) can be extraneous. There really is only one true response: we create our first best guess, and then we test. The testing doesn’t have to rise to the level of scientific rigor, mind you. Even if it just passes the scrutiny of fellow team members, it can be the right choice, though ideally we run it by learners.

What we have to fight is those who want to tart it up. There will be folks who want more aesthetics. We have to push back against that, particularly if we think it interferes with learning. We need to ensure that what we’re producing doesn’t violate what’s known. It’s not always easy, and in some situations we may not win, but we have to be willing to give it a go.

There are tradeoffs in aesthetics, so we have to know what matters. Ultimately, it’s about the learning outcomes. Thus, focusing on the minimum contextualization, and the maximum learning, is likely to get us to a good first draft. Then, let’s see if we can’t check. Right?

Time is the biggest problem?

21 March 2023 by Clark 1 Comment

In conversations, I’ve begun to suspect that one of the biggest, if not the biggest, problems facing designers wishing to do truly good, deep design is client expectations. That is, a belief that if we’re provided with the appropriate information, we can crank out a solution. Why, don’t you just distribute the information across the screen and add a quiz? While there are myriad problems, such as lack of knowledge of how learning works, etc., folks seem to think you can turn around a course in two weeks. Thus, I’m led to ponder whether time is the biggest problem.

In the early days of educational technology, it was considered technically difficult. Thus, teams worked on instantiations: instructional designers, media experts, technologists. Moreover, they tested, refined, and retested. Over time, the tools got better. You still had teams, but things could go faster. You could create a draft solution pretty quickly with rapid tools. However, when people saw the solutions, they were satisfied. It looks like content and quizzes, which is what school is, and that’s learning, right? Without understanding the nuances, it’s hard to tell well-produced learning from well-designed and well-produced learning. Iteration and testing fell away.

Now, folks believe that with a rapid tool and content, you can churn out learning by turning the handle. Put content into the hopper, and out come courses. This was desirable from a cost-efficiency standpoint. This gets worse when we fail to measure impact. If we’re just asking people whether they like it, we don’t really know if it’s working. There’s no basis to iterate! (BTW, the correlation between learner assessments of learning quality and the actual quality is essentially zero.)

For the record, an information dump and knowledge test is highly unlikely to lead to any significant change in behavior (which is what we’re trying to accomplish with learning). What we need is meaningful practice, and getting that right requires a first draft, and fine tuning. We know this, and yet we struggle to find time and resources to do it, because of expectations.

These expectations of speed, and unrealistic beliefs in quality, create a barrier to actually achieving meaningful outcomes. If folks aren’t willing to pay for the time and effort to do it right, and they’re not looking at outcomes, they will continue to believe that what they’re spending isn’t a waste.

I’ve argued before that what might make the biggest impact is measurement. That is, we should be looking to address some measurable problem in the org and not stop until we have addressed it. With that, it becomes easier to show that the quick solutions aren’t having the needed impact. We need evidence to support making the change, but I reckon we also need to raise awareness. If we want to change perception, and the situation, we need to ensure that others know time is the biggest problem. Do you agree?

Process and Product

14 March 2023 by Clark Leave a Comment

Of late, I’ve been talking a fair bit about my take on learning experience design (LXD). To me, it’s the elegant integration of learning science with engagement. Of course, I’m biased, as my two most recent books are specifically to those ends! I don’t claim it’s automatic, but I do believe that with practice, it gets easier. You need to address both process and product, of course.

Our goals are, ultimately, to achieve learning outcomes, typically retention and transfer. That is, retaining skills over time ’til needed and transferring to all appropriate (and no inappropriate) situations. This requires, cognitively, sufficient practice and an appropriate spread of contexts. Emotionally, it requires an initial hook and then maintaining commitment through the experience via relevant activities.

I’ve been running a workshop with my partner, Upside Learning, on the ‘missing bits’. That is, the fine tuning that takes what you normally do in ID and fills in the extra steps that will successfully provide the integration. It’s been great for stress-testing the workshop (stay tuned!), and extremely insightful. I get to hear what these smart and experienced folks are realizing in their own practices, and what they’re struggling to change. That’s my goal, of course: to help them bake learning science and engagement into their processes and products.

One of the concerns, not surprisingly, is that it takes more time. That includes upfront analysis (which clients can also resist). Then it requires a bit more thought on designing practice and winnowing content. Finally, it should be iterative. I’m not the only one focusing on the latter, of course. However, I argue that it ultimately really doesn’t take that much more time, but there will be a speed bump until the new way of thinking becomes automatic.

Still, it will require adjusting how we develop, to impact what we develop. Process and product are linked at the wrist and ankles. Understanding the underlying principles, the integration of learning science and engagement, is a necessary foundation. That’s my take; I welcome yours.

I’ll be offering a free webinar with Training Magazine Network on the core principles of LXD on Wednesday March 22 at 9AM PT (noon ET). I note that if there’s a conflict, they’ll make the recording available afterwards if you register. I welcome seeing you there!

What to do?

7 March 2023 by Clark 2 Comments

Let me suggest that one of the biggest gaps in our thinking is about doing. Too often, we think about ‘learning’ as the end goal, and it’s not. (I’ve gone as far as suggesting we rename L&D!) We need to rethink and ask about doing, not learning; we need to ask about people: what to do?

To start with, for organizational needs we don’t learn for intellectual self-gratification. (Though, too often, it seems that way: ‘awareness’ courses continue to perplex me. What possible organizational value will they achieve?) Instead, there should be identified gaps that are targeted because remedying them will improve outcomes.

There’s a whole process of analysis that starts with looking at gaps between ideal and real performance. Where are we lacking? Then, for any particular gap, we look for the root cause: is it a lack of skill, lack of knowledge, lack of resources, lack of motivation, … ? This up front work keeps us from using training to address a misalignment between incentives and desired behavior, for instance. Training isn’t going to keep people from doing things that are in their best interest! (And rightly so.)

Then, when we identify the root cause, we can target the appropriate intervention. Not all interventions may be within L&D’s purview, of course. We design courses. We could also be the ones who design performance support and facilitate informal learning (who better?). Of course, we shouldn’t be responsible for hiring or resourcing or compensation; at least not without a job description and skilling rethink. Our organizations deserve to invest in things that will move important needles.

This all is a shift to a focus on performance, on doing, not learning. While there are a variety of terms, this, to me, falls under the label ‘performance consulting’. It starts by asking “what should people be doing?”, with the caveat that they aren’t doing it now, or are doing it wrongly. Then we ask “why?” Finally, we’re ready to design a solution. We’re focusing on outcomes. If it’s skills, it has to be in the head, and learning’s involved. If it’s knowledge that has to be in the head, learning can be involved; otherwise we should put it in the world. And so on.

My intent here is to suggest focusing on performance, not learning or knowing. That makes a better focus for investment, and is easier to recognize when it’s been remedied. So, what to do? Focus on performance first. Determine if a learning solution is your best choice, before you invest in it. Otherwise, you could be throwing money away. If you’ve got money to throw away, I can help ;), but I’d rather help you use it wisely.

The Learning Development Accelerator is running a mini-conference on performance consulting. It’s four half-days of immersion in the topics, with some of the top folks in the field. All with the usual focus on evidence-based practices. If you want to start doing L&D right, it’s a good start! 

Misconceptions?

28 February 2023 by Clark 5 Comments

Several books ago, I was asked to talk about myths in our industry. I ended up addressing myths, superstitions, and misconceptions. While the myths persist, the misconceptions propagate, aided by marketing hype. They may not be as damaging, but they are also a money-sink, and contribute to our industry’s lack of progress. How do we address them?

The distinctions I make for the three categories are, I think, pretty clear. Myths are beliefs that folks will willingly proclaim, but that are contrary to research. This includes learning styles, the attention span of a goldfish, millennials/generations, and more (references in this PDF, if you care). Superstitions are beliefs that don’t get explicit support, but manifest in the work we do. For example, that new information will lead to behavior change. We may not even be aware of the problems with these! The last category is misconceptions. They’re nuanced, and there are times when they make sense, and times they don’t.

The problem with the latter category is that folks will eagerly adopt, or avoid, these topics without understanding the nuances. They may miss opportunities to leverage the benefits, or perhaps more worrying, they’ll spend on an incompletely-understood premise. In the book, I covered 16 of them:

70:20:10
Microlearning
Problem-Based Learning
7 – 38 – 55
Kirkpatrick
NeuroX/BrainX
Social Learning
UnLearning
Brainstorming
Gamification
Meta-Learning
Humor in Learning
mLearning
The Experience API
Bloom’s Taxonomy
Learning Management Systems

On reflection, I might move ‘unlearning’ to myths, but I’d certainly add to this list. Concepts like immersive learning, workflow learning, and Learning Experience Platforms (LXPs) are some that are touted without clarity. As a consequence, people can be spending money without necessarily achieving any real outputs. To be clear, there is real value in these concepts, just not in all conceptions thereof. The labels themselves can be misleading!

In several of my roles, I’m working to address these, but the open question is “how?” How can we illuminate the necessary understanding in ways that penetrate the hype? I truly do not know. I’ve written here, and spoken and written elsewhere, about such concepts, to little impact (microlearning continues to be touted without clarity, for instance). At this point, I’m open to suggestions. Perhaps, like with myths, it’s just persistent messaging and ongoing education. However, not being known for my patience (a flaw in my character ;), I’d welcome any other ideas!

Thinking artificially

21 February 2023 by Clark Leave a Comment

I finally put my mitts on ChatGPT. The recent revelations, concern, and general plethora of blather about it made me think I should at least take it for a spin around the block. Not surprisingly, it disappointed. Still, it got me thinking about thinking artificially. It also led me to a personal commitment.

What we’re seeing is a two-fold architecture. On one side is a communication engine, e.g. ChatGPT. It’s been trained to be able to frame, and reframe, text communication. On the other side, however, must be a knowledge engine, e.g. something to talk about. The current instantiation used the internet. That’s the current problem!

So, when I asked about myself, the AI accurately posited two of my books. It also posited one that, as far as I know, doesn’t exist! Such results are not unknown. For instance, owing to the prevalence of the learning styles myth (despite the research), the AI can write about L&D and mention styles as a necessary consideration. Tsk!

The problem’s compounded by the fact that many potential knowledge bases, beyond the internet, have legacy problems. Bias has been a problem in human interactions, and records thereof can therefore also carry bias. As I (with co-author Markus Bernhardt) have opined, there is a role for AI in L&D, but a primary one is ensuring that there’s good content for an AI engine to operate on. Another, I argue, is to create the meaningful practice that AI currently can’t, and that is likely to remain true for the foreseeable future. I also have yet to see an AI that can create a diagram (tho’ that, to me, isn’t as far-fetched, depending on the input).

I have heard from colleagues who find the existing ChatGPT very valuable. However, they don’t take what it says as gospel, instead they use it as a thinking partner. That is, they’ll prompt it with thoughts they’re having to see what comes up. The goal is to get some lateral input to consider (not take as gospel). It’s a way to consider ideas they may have missed or not seen, which is a valuable role.

At this point, I may or may not use AI in this way, as a thinking (artificially) partner. I’ll have to experiment. One thing I can confidently assert is that everything you read (e.g. here) that is truly from me (i.e. there’s the possibility I will be faked) will be truly from me. I’m immodest enough to think that my writing is not in need of artificial enhancement. I may be wrong, but that’s OK with me. I hope it is with you, too!

A step backward?

14 February 2023 by Clark Leave a Comment

In working with colleagues about redesigning design (our goal is better incorporating learning science into practices), I had a realization. I frequently see in practice, and it’s pretty much the orientation of the tools, that we work forwards. That is, we start at the beginning, work our way forward through content and practice, and end at, well, the end. While this may make sense from a workflow perspective, there’s a fundamental flaw. So I think it’s time we take a step backward.

Once we’ve done the analysis, and put our goal in mind, it can seem reasonable to move forward, through the various steps. It’s one way to create a coherent experience. However, there’s a flaw with this. For one, it takes our eye off the ball. That is, what’s core is what our performers come out able to do. Getting lost in the flow of experience may lead us astray. For another, it’s assuming we’ll get it right the first time. That’s a mistake.

You can start at either of two places to see an alternative. For one, as McTighe and Wiggins have advocated in Understanding by Design, they focus on the outcomes first and work backwards. For another, modern successors to older design practices – Michael Allen’s Successive Approximations Model (SAM), Megan Torrance’s Lot Like Agile Management Approach (LLAMA), Cathy Moore’s Action Mapping, and David Merrill’s Pebble in the Pond, for some prominent examples – all start designing from the practice first. They iterate on the practice with testing, while working backwards to necessary prerequisite problems, until you get to where your audience starts.

This is both pragmatic and principled. On principle, you work backwards from the core problem. This keeps you aligned with the outcome. Then you supplement with the minimal material to help performers succeed. This includes examples, models, etc. Then, like with a proper paper, you write the introduction and closing last. Pragmatically, this keeps the focus on the critical parts, and ensures you’re focusing your valuable time honing the most important elements first.

It’s easy (trust me, I fall prey to this too) to work forward. Still, it’s smarter to take a step backward and work that way. If you want an impact. Which, I suspect, you do. Or you should, eh?

It’s complex

7 February 2023 by Clark Leave a Comment

In a recent conversation, I was talking about good design. Someone asked a question, and I elaborated that there was more to consider. Pressed again, I expanded yet more. I realized that when talking good learning design, it’s complex. However, knowing how it’s complex is a first step. Also, there are good guidelines. Still, we will have to test.

I’m not alone in suggesting that, arguably, the most complex thing in the known universe is the human brain. I jokingly ask whether bullet points are going to lead to sustained changes in behavior in such a complex organism. Yet, I also tout learning science design principles that help us. Is there a resolution?

The complexity comes from a number of different issues. For one, the type, quantity, challenge, and timing of practice depend on multiple factors. Things that can play a role include how complex the task is, how frequently it’s performed, and how important the consequences are. Similarly, the nature of the topic, whether it’s evolutionarily primary or secondary, can also have an influence. The audience, of course, makes a difference, as does the context of practice. Addressing the ‘conative’ element – motivation, anxiety, confidence – also requires some consideration. That’s a lot of factors!

Yet, we know what makes good practice, and we can make initial estimates of how much we need. Likewise, we can choose a suite of contexts to be covered to support appropriate transfer. We have processes as well as principles to assist us in making an initial design.

Importantly, we should not assume that the first design is sufficient. We do, unfortunately, and wrongly. Owing to the complexity of items identified previously, even with great principles and practices, we should expect that we’ll need to tune the experience. We need to prototype, test, and refine. We also need to build that testing into our timelines and budgets.

There is good guidance about testing, as well. We know we should focus on practice first, using the lowest technology possible. We should test early and often. Just as we have design guidance, these are practices that we know assist in iterating to a sufficient solution. Similarly, we know enough that it shouldn’t take much tuning since we should be starting from a good basis.

Using the cognitive and learning sciences, we have good bases to start from on the way to successful performance interventions. We have practices that address our limitations as designers, and the necessities for tuning. We do have to put these in practice in our planning, resourcing, and executing. Yet we can create successful initiatives reliably and repeatedly if we follow what’s known, including tuning. It’s complex, but it’s doable. That’s the knowledge we need to acknowledge, and ensure we possess and apply.

Vale Roger Schank

3 February 2023 by Clark 4 Comments

I’d first heard of Roger Schank’s work as an AI ‘groupie’ during my college years. His contributions to cognitive science have been immense. He was a challenging personality and intellect, and yet he fought for the right things. He passed away yesterday, and he will be missed.

Roger’s work connected story to cognition. He first saw how we had expectations about events owing to his experience at a restaurant with an unusual approach. At Legal Seafoods (at the time) you paid before being served (more like fast food than a sit-down venue). Surprised, Roger realized that there must be cognitive structures for events that were similar to the proposed schemas for things. He investigated the phenomena computationally, advancing artificial intelligence and cognitive science. Roger subsequently applied his thinking to education, writing Engines for Education (amongst other works), while leading a variety of efforts in using technology to support learning. He also railed against AI hype, accurately of course. I was a fan.

I heard Roger speak at a Cog Sci conference I attended to present part of my dissertation research. The controversy around his presentation caused the guest speaker, Stephen Jay Gould, to comment “you guys are weird”! His reputation preceded him; I had one of his PhD graduates on a team and he told me Roger was deliberately tough on them, saying “if you can survive me, you can survive anyone”.

I subsequently met up with Roger at several EdTech events hither and yon. At each he was his fiery, uncompromising self. Yet, he was also right. He was a bit of a contradiction: opinionated and unabashed, but also generous and committed to meaningful change. He also was a prodigious intellect; if you were as smart as he was, I guess you had a reason to be self-confident. I got to know him a bit personally at those events, and then when he engaged me for advice to his company. He occasionally would reach out for advice, and always offered the same.

He could be irritating in his deliberate lack of social graces, but he was willing to learn, and had a good heart. In return, I learned a lot from him, and use some of his examples in my presentations. It was an honor to have known him, and the world will be a little duller, and probably a little dumber, without him. Rest in peace.

Coping with information

2 February 2023 by Clark Leave a Comment

I just finished reading Ross Dawson’s Thriving on Overload, and it’s a worthy read. The subtitle basically explains it: The 5 powers for success in a world of exponential information. The book balances principle and practice, with clear and cogent explanations. It’s not the only model for information management given the increasing challenge, but it’s a worthwhile read if you’re looking for help in coping with the information deluge.

I’d heard Ross speak at an event, courtesy of my late friend Jay Cross. Ross is renowned as a futurist, perceiving trends ahead of most folks. He’s an Aussie (Oz being my 2nd home ;); I can’t say I really know him, but he has a well-established reputation, and keynotes around the world. He was perfectly coherent then, and is again here.

Dawson frames elements in terms of how our brain works, which makes sense. He suggests: having an initial purpose, understanding the connections, filtering what’s coming in, paying attention to what’s important, and synthesizing what’s seen. Then, of course, it’s integrating them into a collective whole. He tosses in many interesting and useful observations along the way.

I’ve been, and remain, a fan of Harold Jarche’s Personal Knowledge Management (PKM). His framework is fairly simple – seek, sense, share – though the nuances make it powerful. He receives a mention, but I see some synergies. Harold takes the ‘purpose’ as implicit, and I see Dawson’s framing and synthesizing as both parts of Jarche’s ‘sense’. Similarly, I see Dawson’s attention and filtering as equivalent to Jarche’s ‘seek’. Where they differ most is, to me, where Jarche asks you to share out your learning, and Dawson’s is more personal.

Dawson’s steps are coherent, individually and collectively. As a fan of diagramming, I liked his focus on framing. He grounds much of his arguments in the natural ways our brains work, which I also am a fan of. I will quibble slightly at the end, where he says our brains are evolving to meet this new demand. If we use a metaphor between hardware and software, I’d agree that our brains adapt, but that’s not unique to information overload. What isn’t happening is our brain’s architecture changing. I think his claim may be slightly misleading in that sense. A small quibble with a generally very good book.

Overall, I think the practices Dawson recommends are valuable and sound. In this era of increasing information, having practices that assist are critical. You can take Harold’s workshop, or read Ross’s book; both will give you useful skills. What you shouldn’t do is continue on without some systematic practices. If you’re looking for help coping with information, it’s available. Recommended.
