Learnlets

Clark Quinn’s Learnings about Learning

Aligning and enabling transformation

2 November 2021 by Clark

In what was my last Quinnsights column for Learning Solutions, I wrote about how the transformation wasn’t (or shouldn’t be) digital. In many ways we aren’t aligned with what’s best for our thinking, so digitizing existing approaches doesn’t make sense. Instead, we should fix our organizational alignment first, then digitize. The opportunity is in aligning and enabling transformation.

First, we should be looking at all the levels of organizational alignment. At the individual level, we can do things like implementing federated search to support individual learning. This should be coupled with developing people’s skills in writing good search strings and evaluating search results. It also means curating a suite of resources aligned with learning directions and future opportunities. The point is that we should support evidence-based methods for individual development, then support them digitally. For instance, support learning-to-learn skills; taking them for granted is a mistake! It’s also about ongoing support for development, e.g. coaching. Good practices help, and tools that document approaches and outcomes can assist.

At the group level, there are again ways in which we can foster effectiveness. This includes having good collaboration tools, and assisting people in using them well. It can also be about policies that make ‘show your work’ safe. Then you can augment with ‘show your work’ tools. Again, having the right practices and policies makes the digital transformation investment more valuable; if you institute the old ways instead of doing the process work first, you could pick the wrong tools.

This holds true at the organizational level as well, of course. Policies and practices cross the organization; what works for teams comes from an organizational focus on learning. Then the digital investments are focused on the optimal outcomes. The alternative, digitizing unaligned practices, can only hinder the organization’s improvement.

There are a lot of myths about what works. This includes learning myths, but also bad HR practices. Many stem from maintaining approaches that are carryovers from industrial age business. Instead, we should be leveraging our knowledge of thinking to be strategic. L&D can be critically contributing to organizational success! Or not. There’s a big opportunity to shift practices in a positive direction, with upsides for outcomes. However, it takes the understanding and the will. What will you do?

This is related to the talk I’ll be giving as the opening keynote for the ATD Japan Summit in December (though I’m filming it for virtual delivery). I get my thinking done here first ;).  

Tech Thoughts

28 October 2021 by Clark

I’m late with a post this week, owing to several factors, all relating to technology. I hadn’t quite pulled together a complete lesson, but by writing down these tech thoughts, I got there. (A further argument for the benefits of reflection.)

It started with upgrading our phones. We needed to (I tend to hand mine down, but really we both needed an upgrade this time). Of course there were hiccups, particularly since m’lady’s was so old that it couldn’t do the amazing thing mine had done. With mine, you just put the old phone and the new phone together, the new one sucks the data off the old one, and then the old one asks if you want to wipe it clean! That’s some serious user experience. It’s something we should look to more in our LXD, so that we’re doing proper backwards design and have the right combination of tools and learning to make performance relatively effortless.

Then another thing was quite interesting. An individual linked to me on the basis of a citation in a book. I didn’t know the book, so he sent me a picture of the page. He also asked if I could read Dutch. Sadly, no. However, I had recently upgraded my OS, and when I opened the picture, I noticed I could click on the text. Of. The. Picture!  I could select all the text (my OS was doing OCR on the picture live!), and then I could paste into Google Translate (another amazing feat) and it recognized it as Dutch and translated it into English. Whoa!

On the flip side, owing to the unusually heavy rain (for California), first our internet went out, and then the power. Fortunately both were working by the next morning. However, after that my backup drives kept dismounting and I couldn’t execute a backup reliably. I thought it might be the laptop, and I did a couple of increasingly difficult remedial measures. Nope. Had the drives been damaged by the power outage? Power outages aren’t quite new around here (we’re a bit up a hillside, and squirrels regularly blow the transformer), yet it hadn’t happened before.

Then I was on a Zoom call, and I started having hiccups in the microphone and camera. Even typing. What? When I switched to the laptop camera, it was all good. All the things, drives, microphone, external monitor, are connected by an external hub. The hub had gone wonky! Instead of having to replace drives, I picked up a new hub last night, and all’s good now. Phew!

I guess my take-home tech thought is that we’re making true a promise I’ve mentioned when talking mobile: we really do have magic. (Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic.) We can do magical things like talk at a distance, have demons do tasks on our behalf (picture text transcribing and translation), etc. On the other hand, when it doesn’t work, it can be hard to identify the problem! Overall, it’s a win. Well, when it’s designed right! Which involves testing and tuning. As Dion Hinchcliffe put it: “Seamless #cx is now table stakes.” So let’s get designing, testing, and tuning, and make magical experiences.

More lessons from bad design

24 August 2021 by Clark

I probably seem like a crank, given the way I take things apart. Yet, I maintain there’s a reason beyond “get off my lawn!” I point out flaws not to complain, but instead to point to how to do it better. (At least, that’s my story and I’m sticking to it. ;) Here’s another example, providing more lessons from bad design.

In this case, I’ll be attending a conference and the providers have developed an application to support attendees. In general, I look forward to these applications. They provide ways to see who’s attending, and peruse sessions to set your calendar. There are also ways to connect to people. However, two major flaws undermine this particular instance.

The first issue is speed. This application is slow! I timed it: 4 seconds to open the list of speakers or attendees. Similarly, I clicked on a letter to jump through the list of attendees; the time it took varied from 4 to 8 seconds. Jumping to the program took 6 seconds.

While that may seem short, compare that to most response times in apps. You essentially can’t time them, they’re so fast. More than a second is an eternity in mobile responsiveness. I suspect that this app is written as a ‘wrapped’ website, not a dedicated app. Which works sometimes, but not when the database is too big to be responsive. Or it could just be bad coding. Regardless, this is basically unusable. So test the responsiveness before the app is distributed, to make sure it’s acceptable. (And then reengineer it when it isn’t.)

That alone would be sufficient to discount this app, but there’s a second problem. Presumably for revenue reasons, there are ads that scroll across the top. That might make sense to keep the costs of the app down, but it conflicts with a fundamental feature of our visual architecture.

Motion in the periphery of our vision is distracting. That was evolutionarily adaptive, allowing us to detect threats from places that we weren’t focusing on. Yet, when it’s not a threat, and we  are trying to focus on something, it interferes. We learned about this in the days of web pages with animated gifs: you couldn’t process what you were there to consume!

In this app, the scrolling of the ads makes it more difficult to read the schedule, attendee lists, and other information. Thus, the whole purpose of the application is undermined. You could instead have static ads randomly attached to the pages you visit; the audience is likely to go to several pages, so all the ads will get seen. Having them move to ensure that you see them all, however, defeats the reason people use the app in the first place.

Oddly enough, there are other usability problems here. On the schedule, there’s a quick jump to times on a particular day. Though it stops at 2PM!?!? (The conference extends beyond that; my session’s at 4PM.) You’d think you could swipe to see later times on that ‘jump’ menu, but that doesn’t work. I can’t explore further, because the usability makes it too painful; there may be more lessons from bad design that we’re missing.

Our cognitive architecture is powerful, but has limitations. Designing to work in alignment with our brains is a clear win, and this holds true for designing for learning as well as performance support. Heck, I’ve written a whole book about how our minds work, just to support our ability to design better learning! Conflicting with our mental mechanisms is just bad design. My goal is that with more lessons from bad design, we can learn to do better. Here’s to good design!

Concept Maps and Learning

3 August 2021 by Clark

Once again, someone notified me of something they wanted me to look at. In this case, a suite of concept maps, with a claim that this could be the future of education. And while I’m a fan of concept maps, I was suspicious of the claim. So, while I’ve written on mindmaps before, it’s time to dig into concept maps and learning.

To start, the main distinction between mindmaps and concept maps is labels. Specifically, concept maps have labels that indicate the meaning of the connections between concepts. At least, that’s my distinction. So while I’ve done (a lot of) mindmaps of keynotes, they’re mostly of use to those who also saw the same presentation; otherwise, the terms and connections don’t necessarily make sense. (Which doesn’t mean a suite of connections can’t be valuable, c.f. Jerry’s Brain, where Jerry Michalski has been tracking his explorations for over two decades!) However, a concept map does a better job of indicating the total knowledge representation.

I know a wee bit about this, because while writing up my dissertation, I had a part-time job working with Professor Kathy Fisher and SemNet. Kathy Fisher is a biologist and teacher who worked with Joe Novak (who can be considered the originator of concept mapping). SemNet is a Macintosh concept mapping tool (Semantic Network) that Kathy created and used in teaching biology. It allows students to represent their understanding, which instructors can use to diagnose misconceptions.

I also later volunteered for a while with the K-Web project. This was a project with James Burke (of Connections fame) creating maps of the interesting historical linkages his show and books documented. Here again, navigating linkages can be used for educational purposes.

With this background, I looked at this project. The underlying notion is to create a comprehensive suite of multimedia mindmaps of history and the humanities. This, to me, isn’t a bad thing! It provides a navigable knowledge resource that could be a valuable adjunct to teaching. Students can be given tasks to find the relationships between two things, or asked to extend the concept maps, or… Several things, however, are curious at least.

The project claims to be a key to the future of global education. However, as an educational innovation, the intended pedagogical design is worrisome. The approach claims that “They have complete freedom to focus on and develop whichever interests capture their fancy.” and “…the class is exposed to a large range of topics that together provide a comprehensive and lively view of the subject…”  This is problematic for two reasons. First, there appears to be no guarantee that this indeed will provide comprehensive coverage. It’s possible, but not likely.

As a personal example, when I was in high school, our school district decided that the American Civil War would be taught as modules. Teachers chose to offer whatever facets they wanted, and students could take any two modules they wanted. Let me assure you that my knowledge of the Civil War did not include a systematic view of the causes, occurrences, and outcomes, even in ideologically distorted versions. Anything I now know about the Civil War comes from my own curiosity.

Even with the social sharing, a valuable component, there appears to be no guidance to ensure that all topics are covered. Fun, yes. Curricularly thorough, no.

Second, presenting on content doesn’t necessarily mean you’ve truly comprehended it. As my late friend, historian Joseph Cotter, once told me, history isn’t about learning facts, it’s about learning to think like a historian. You may need the cultural literacy first, but then you need to be able to use those elements to make comparisons, criticisms, and more.  Students should be able to  think with these facts.

Another concerning issue in the presentation about this initiative is this claim: “reading long passages of text no longer works very well for the present generation of learners. More than ever, learners are visual learner [sic].” This conflates two myths: the digital native myth and the learning styles myth. Both have been investigated and found lacking in empirical support. No one likes to read long passages of text without some intrinsic interest (but we can do it).

In short, while I laud the collection, the surrounding discussion is flawed. Once again, there’s a lack of awareness of learning science being applied. While that’s understandable, it’s not sufficient.  My $0.05.

2021 Top 10 Tools for Learning

7 July 2021 by Clark

As always, I like to participate in my Internet Time Alliance colleague Jane Hart’s Top 10 Tools for Learning survey. However, in reviewing last year’s list, things haven’t changed much. Still, it’s worth getting out there. So here are my selections for the 2021 top 10 tools for learning.

One of the major things I do is write: books, blog posts, articles, and more. So the first two tools I use are for writing:

1. Word. Yes, not totally pleased about the provider, but I have yet to find a tool with better industrial strength outlining. And, well, I’ve been using it since around 1989, so…there’s some familiarity…

2. WordPress. Of course, that’s how I’m writing here. I also use it for writing for the HPT Treasures blog (I post once a month, on the third Friday), and occasionally for managing other sites (e.g. IBSTPI).

Another way I get my mind around new understandings is by representing information structure. So:

3. OmniGraffle. While this is Apple only, and dear, it so far is the best tool I’ve found to make diagrams. It’s got more capability than I need, but it also works the way I think, so…all told it’s still the winner.

4. OmniOutliner. Outlining is another way I think. While for writing it can be in Word, for other things: checklists, presentations, etc, it helps to have a dedicated tool. Again, Apple only, dear, and overkill, but their cheaper version doesn’t include columns, and that can be helpful!

I also do a fair bit of presenting: keynotes, webinars, and more. While I’m often forced to end up using Powerpoint…

5. Keynote. My native presentation tool.  (yes, I’m an Apple person, what can I say?). It’s just cleaner for me than alternatives.

From there, we get to social tools:

6. Twitter has been a long-standing tool. Tracking other folks, participating in dialogs, and even asking questions, Twitter’s an ongoing contributor to my learning.

7. LinkedIn is where I professionally socialize, and it’s becoming more prominent in my interactions. People reach out to me there, and I also track some folks, and there are occasionally interesting discussions.

8. Zoom has, well, zoomed up to the top of my interaction tool suite.  I’ve used it for chats, meetings, and webinars.

Then, of course, there’s searching for answers.

9. DuckDuckGo. I’ve switched to use this as my search engine, as it’s less-tracking, and provides good results.

10. Safari is still my browser of choice. I’ve experimented with Brave, but it didn’t sync bookmarks across my devices. Now it does, but it’s hard to switch again.

So that’s my 2021 top 10 learning tools list. (Not really in any order, but I’ve numbered anyway. ;) It’s a personal list, since I’m not formally in education nor part of a workplace. I’ve been using Teams  more, but I still find it has silly limitations, so my preference is Slack.

 

 

A message to CxOs 2: about org learning myths

11 May 2021 by Clark

When I wrote my last post on a message to CxOs about L&D myths, I got some pushback. Which, for the record, is a good thing; one of us will learn something. As a counter to my claim that L&D often is its own worst enemy, the point was made that there are folks in L&D who get it, but fight upward against wrong beliefs. Which absolutely is true as well. So, let’s also talk about what CxOs need to know about the org learning myths they may believe.

First, however, I do want to say that there is evidence that L&D isn’t doing as well as it could and should. This comes from a variety of sources. However, the question is where the blame lies. My previous post talked about how L&D deludes itself, but there are also reasons to believe in unfair expectations. So here’s the other side.

  1. If it looks like schooling… I used this same one against L&D, but it’s also the case that CxOs may believe this. Further, they could be happy if that’s the case. Which would be a shame, just as I pointed out in the other case. Lectures, information dump & knowledge test, content presentation in general, don’t lead to meaningful change in behavior in the absence of activity. Designed action and guided reflection, which looks a lot more like a lab or studio than a classroom, is what we want.
  2. SMEs know what needs to be learned. Research tells us to the contrary; experts don’t have conscious access to around 70% of what they  do (tho’ they do have access to what they know). Just accepting what a SME says and making content around that is likely to lead to a content dump and lack of behavior change. Instead, trust (and ensure) that your designers know more about learning than the SME, and have practices to help ameliorate the problem.
  3. The only thing that matters is keeping costs low.  This might seem to be the case, but it reflects a view that org learning is a necessary evil, not an investment. If we’re facing increasing change, as the pundits would have it, we need to adapt. That means reskilling. And effective reskilling isn’t about the cheapest approach, but the most effective for the money. Lots of things done in the name of learning (see above) are a waste of time and money. Look for impact first.
  4. Courses are the answer to performance issues.  I was regaled with a tale about how sales folks and execs were  insisting that customers wanted training. Without evaluating that claim. I’ll state a different claim: customers want solutions. If it’s persistent skills, yes, training’s the answer. However, a client found that customers were much happier with how-to videos than training for most of the situations. It’s a much more complex story.
  5. Learning stops at the classroom. As is this story. One of the reasons Charles Jennings was touting 70:20:10 was not because of the numbers, but because it was a way to get execs to realize that only the bare beginning came from courses, if at all. There’s ongoing coaching with stretch assignments and feedback, and interacting with other practitioners…don’t assume a course solves a problem. A colleague mentioned how her org realized that it couldn’t create a course without also creating manager training, otherwise they’d undermine the outcomes instead of reinforcing them.
  6. We’ve invested in an LMS, that’s all we need. That’s what the LMS vendors want you to believe ;)! Seriously, if all you’re doing is courses, this could be true, but I’m hoping the above has convinced you that courses alone aren’t enough.
  7. Customers want training. Back to an earlier statement: customers want solutions. It is cool to go away to training and get smothered in good food and perks. However, it’s also known that sometimes that goes to the manager, not the person who’ll actually be doing the work! Also, training can’t solve certain types of problems. There are many types of problems customers encounter, and they have different types of solutions. Videos may be better for things that occur infrequently, onboard help or job aids may meet other needs too unusual to predict for training, etc. We don’t just want to make customers happy, we want to make them successful!
  8. We need ways to categorize people. It’s a natural human thing to categorize, including people. So if someone creates an appealing categorization that promises utility, hey, that sounds like a good investment. Except, there are many problems! People aren’t easy to categorize, instruments struggle to be reliable, and vested interests will prey upon the unwary. Anyone can create a categorization scheme, but validating it, and having it be useful, are both surprisingly big hurdles. Asking people questions about their behavior tends to be flawed for complex reasons. Using such tools for important decisions like hiring and tracking has proven to be unethical. Caveat emptor.
  9. Bandwagons are made to be jumped on. Face it, we’re always looking for new and better solutions. When someone links some new research to a better outcome, it’s exciting. There’s a problem, however. We often fall prey to arguments that appear to be new, but really aren’t. For instance, all the ‘neuro’ stuff unpacks to some pretty ordinary predictions we’ve had for yonks. Further, there are real benefits to machine learning and even artificial intelligence. Yet there’s also a lot of smoke to complement the sizzle. Don’t get misled. Do a skeptical analysis. This holds doubly true for technology objects. It’s like a cargo cult: what has come down the pike must be a new gift from those magic technologists! Yet, this is really just another bandwagon. Sure, Augmented Reality and Virtual Reality have some real potential. They’re also being way overused. This is predictable, c.f. Powerpoint presentations in Second Life, but ideally is avoided. Instead, find the key affordances – what the technology uniquely provides – and match the capability to the need. Again, be skeptical.

My point here is that there can be misconceptions about learning  within  L&D, but it can also be outside perspectives that are flawed. So hopefully, I’ve now addressed both. I don’t claim that this is a necessary and complete set, just certain things that are worth noting. These are org learning myths that are worth trying to overcome, or so I think. I welcome your thoughts!

A message to CxOs: about L&D myths

4 May 2021 by Clark

If you’re a CEO, COO, CFO, and the like, are you holding L&D to account? Because much of what I see coming out of L&D doesn’t stand up to scrutiny. As I’ve cited in books and presentations, there’s evidence that L&D isn’t up to scratch. And I think you should know a few things that may be of interest to you. So here’re some L&D myths you might want to watch out for.

  1. If it looks like school, it must be learning. We’ve all been to school, so we know what learning looks like, right? Except, do you remember how effective school actually was? Did it give you many of the skills you apply in your job now?  Maybe reading and writing, but beyond that, what did you learn about business, leadership, etc? And how  did you learn those things? I’ll bet not by sitting and listening to lectures presented via bulletpoints. If it looks like schooling, it’s probably a waste of time and money. It should look more like lab, or studio.
  2. If we’re keeping our efficiency in line with others, we’re doing good. This is a common belief amongst L&D: well, our [fill in the blank: employees served per L&D staff member | costs per hour of training | courses run per year | etc.] is the same or better than the industry average, so we’re doing good. No, this is all about efficiency, not effectiveness. If they’re not reporting on measurable improvements in business metrics, like sales, customer service, operations, etc., they’re not demonstrating their worth. It’s a waste of money.
  3. We produce the courses our customers need. Can they justify that? It’s a frequent symptom that the courses asked for have little relation to the actual problem. There are many reasons for performance problems, yet the reflexive solution is to throw a course at them, without knowing whether the problem truly stems from a lack of skill. Courses can’t address problems like the wrong incentives, or a lack of resources. If you’re not ensuring that courses are used only when they make sense, you’re throwing away money.
  4. Job aids aren’t our job. Performance should be the job, not just courses. As Joe Harless famously said: “Inside every fat course there’s a thin job aid crying to get out.” There are many times when a job aid is a better solution than a course. To believe otherwise is one of the classic L&D myths. If they’re avoiding taking that on, they’re avoiding a cheaper and more effective solution.
  5. Informal learning isn’t our job. Well, it might not be if L&D truly doesn’t understand learning, but they should. When you’re doing trouble-shooting, research, design, etc., you don’t know the answer when you start. That’s learning too, and there is a role for active facilitation of best principles. Assuming people know how to do it isn’t justifiable. Informal learning is the key to innovation, and innovation is a necessary differentiation.
  6. Our LMS is all we need. Learning management systems (which is a misnomer, they’re course management systems) manage courses well. However, if they’re trying to also be resource portals, and social media systems, and collaboration tools, they’re unlikely to be good at all that. Yet those are also functions that affect optimal performance and continual innovation (the two things I argue  should be the remit of L&D). Further, you want the right tool for the job. One all-singing, all-dancing solution isn’t the way to bet for IT in general, and that holds true for L&D as well.
  7. Our investment in evaluation instruments is valuable. If you’re using some proprietary tools that purport to help you identify and characterize individuals, you’re probably being had. If you’re using it for hiring and promotion, you’re also probably violating ethical guidelines. Whether personality, or behavior, or any other criteria, most of these are methodologically and psychometrically flawed. You’re throwing away money. We have a natural instinct to categorize, but do it on individual performance, not on some flawed instrument.
  8. We have to jump on this latest concept. There’s a slew of myths and misconceptions running around that are appealing and yet flawed. Generations, learning styles, attention spans, neuro-<whatever> and more are all appealing, and also misguided. Don’t spend resources investing in them without knowing the real tradeoffs and outcomes. These are classic L&D myths.
  9. We  have to have this latest technology. Hopefully you’re resistant to new technologies unless you know what they truly will do for your organization. This holds true for L&D as well. They’re as prone to lust after VR and AR and AI as the rest of the organization. They’re also as likely to spend the money without knowing the real costs and consequences. Make sure they’re coming from a place where they know the unique value the technology brings!

There’s more, but that’s enough for now. Please, dig in. Ask the hard questions. Get L&D to be scrutable for real results, not platitudes. Ensure that you’re not succumbing to L&D myths. Your organization needs it, and it’s time to hold them to account as you do the rest of your organization. Thanks, and wishing you all the best.

Something that emerged from a walk, and, well, I had to get it off my chest. I welcome your thoughts.

ID Support Thyself

2 March 2021 by Clark

I want to dig a bit deeper into improving design processes. Here, I look at tools, asking IDs to ‘support thyself’.

As usual, the transcript:


One of the things I do is help organizations improve their design processes. Last week, I talked about when to team up in the process of learning design. Another component of good design, besides knowing when and how to draw in more minds, is baking learning science into your processes. That’s where tools help. I expect that most orgs do have process support, but baking in learning science seems not to be there. So here I’m exhorting IDs to ‘Support Thyself’.

As I discuss in my forthcoming book, there are nuances to each of the elements of learning design (as I also talked about for Learnnovators). That includes meaningful practice, useful models, motivating intros, and more. The question is how to help ensure that as you develop them, you make sure to address all the elements.

One approach, of course, is to use checklists. Atul Gawande has made the case for checklists in his The Checklist Manifesto.  In this great book, he talks about his own inspiring efforts in the context of other high-risk/high-value endeavors such as flight and construction.   There are clear benefits.

The point is that checklists externalize the important elements, supporting us in not forgetting them. It’s easy, when you do yet another task, to think you’ve completed a component because you’ve done it so many times before. Yet this can lead to errors. So having an external framework is useful. That’s part of the rationale behind the Serious eLearning Manifesto!

I had originally been thinking about templates, and that’s another way. And here, I’m not talking about tarted-up quiz show templates. Instead, I mean a tool that leaves stubs for the important things that should be included. In examples, for instance, you could leave a placeholder for referencing the model, and for the underlying thinking. Really, these are checklists in another format. All in all, these are ways that you can Support Thyself!
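As a rough sketch of the idea (the element names here are my own illustration, not a canonical list from any tool), such a template can be as simple as a set of named stubs plus a check for the ones you haven’t yet filled in:

```python
# A sketch of a 'template as checklist' for learning design: stubs for
# the elements that should be present, plus a check that flags any stub
# left unfilled. Element names are illustrative, not a canonical list.

TEMPLATE = {
    "motivating_intro": "TODO: why this matters to the learner",
    "model": "TODO: reference the underlying model",
    "example": "TODO: an example applying the model, showing the thinking",
    "meaningful_practice": "TODO: describe the practice task",
}

def unfilled(design: dict) -> list:
    """Return the names of elements whose stubs haven't been replaced."""
    return [name for name, text in design.items() if text.startswith("TODO")]

design = dict(TEMPLATE)
design["model"] = "The model: practice should mirror real performance"

print(unfilled(design))
# → ['motivating_intro', 'example', 'meaningful_practice']
```

The same idea works in an authoring tool or a document template: the stubs make the missing elements visible, which is the whole point of externalizing the checklist.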

What you don’t want to do is make it too constraining. You want to create a minimum floor of quality, without enforcing a ceiling (other than the ones your own schedule and budget will impose). You want to be creative while also maintaining effectiveness.

And you can do this in your authoring tool. Just as you may have a template you reuse to maintain look and feel, you can have placeholders for the elements. You can also provide guidance for the elements, in a variety of ways.

There are lots of forms of performance support. And, just as we should be using them to assist our performers (even doing backwards design, creating the tools first and then any learning), we should be using them to overcome our own cognitive limitations. Our cognitive architecture is amazing, but it’s prone to all sorts of limitations (there’s no perfect answer). We can suffer from functional fixedness, set effects, confirmation bias, and more.

I’ll admit that I created an ID checklist. The only problem was that it had 178 elements, which might be unwieldy (though it did cover the whole process). But you should make sure that whatever tools you do have cover the necessary elements. I did create a more reasonable one to accompany my ‘Make it Meaningful’ initiative (coming soon to a theater or drive-in near you).

Our brains have limitations that influence our ability to design. Fortunately, we can use technology as support to minimize the impact of those limitations and maximize the contributions of our outcomes. And we should. Thus, my encouragement for IDs to Support Thyself!

Separate content from description

29 December 2020 by Clark 3 Comments

Once again facing folks who aren't using styles, I was triggered to think more deeply about the underlying principle: separating content from description. It's a step toward more powerful, human-aligned systems.

And, as always, here’s the text, in case you (like me) prefer to read ;).


I've ranted before about styles, but I want to make a slightly different pitch today. It's not just about styles, it's about the thinking behind them. The point is to separate content from description.

So, the point about styles is that they're a definition of formatting. You have elements of documents like headings at various levels, and body text, and special paragraphs like quotes, and so on. Then you have features, like font size, bolding and italics, color, etc. And what you see, too often, is people hand-formatting documents, choosing to do headings by increasing the font size, bolding, etc. And, importantly, having to go through and change them all manually if there's a desire for a change in look.

The point of styles is instead merely to say: this is a heading 1, this is a figure, this is a caption, and so on. Then, you separately say: heading 1s will be font size 16, bold, and left-justified. Figures and captions will be centered, in font size 12. And so on. Then, should someone want to change how the document's formatted, you just change the definition of heading 1, and all the heading 1s change.

It goes further. You can define that all heading 1s have a page break before (e.g. a new chapter in a book). And you can define new styles, like for a callout box (e.g. colored background), etc. You can have different heading 1s for a book than for a white paper. And some styles can be based on others. So your headings can use the same font as your body text, and if you want it all to change, you change the source style and the rest will follow.
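The separation can be sketched in a few lines of Python: elements carry only their role, and a single style table carries the description, so one redefinition restyles everything (the style names and properties here are hypothetical):

```python
# Styles separate role ("heading1") from description (its formatting).
styles = {
    "heading1": {"size": 16, "bold": True, "align": "left"},
    "caption":  {"size": 12, "bold": False, "align": "center"},
}

# The document stores only roles, never hand-applied formatting.
document = [("heading1", "Chapter One"), ("caption", "Figure 1: The model")]

def render(doc, style_table):
    """Pair each element's text with the formatting its role maps to."""
    return [(text, style_table[role]) for role, text in doc]

# Change the definition once, and every heading 1 changes with it:
styles["heading1"]["size"] = 20
```

Hand-formatting, by contrast, would mean storing the size and bolding on every element and editing each one by hand when the look changes.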

Which is wonderful for writing, but the concept behind this is what's really important to get your head around. That is, separating role from description. That's what's led me to be keen on content systems. The notion of pulling up content by description, instead of hardwiring content together into an experience, is the dream.

It's all about beginning to use semantics, that is, the meaning of things, as a manipulable tool. Many years ago, I led a project creating an adaptive learning system. We were going to have content objects defined by topic and learning role, and tagged in terms of media, difficulty, and more. So you could say: "pull a video of an example of diversity set in a sales office". Our goal, with a suite of rules about when to move up or back in difficulty, was to specify what learning content the learner should see next, etc.
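Pulling content by description rather than by hardwired sequence can be sketched as a query over tagged objects. This is a minimal illustration, not the actual system; the field names and sample objects are hypothetical, echoing the "video example of diversity in a sales office" query:

```python
# Hypothetical tagged content objects (fields are illustrative).
content = [
    {"topic": "diversity", "role": "example", "media": "video",
     "setting": "sales office", "difficulty": 2},
    {"topic": "diversity", "role": "concept", "media": "text",
     "setting": None, "difficulty": 1},
]

def pull(objects, **criteria):
    """Return every content object matching all of the tagged criteria."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in criteria.items())]

hits = pull(content, topic="diversity", role="example", media="video")
```

Because the experience is assembled by query, swapping in new content or new selection rules doesn't require rebuilding the course.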

This is how adaptive platforms work. When Amazon or Netflix makes a recommendation for you, there's not someone watching your behavior; instead, it's a set of rules matching your particular actions to content recommendations. If you've ordered a lot of British mysteries, and you haven't seen a particular series that lots of other people like, it'll likely be offered to you.

This is the opportunity of the future. We can start doing this with learning (and coaching)! We can pull together your learning goals, job role, current progress, current location (in time and space), etc., and offer you particular things that are appropriate for you. And, like our learning system, it might be recommendations of content, but you can choose others, or ignore them, or… As Wayne Hodgins used to say, present the 'right stuff': the right content to the right person at the right time in the right way.

The point being, just like styles, if we stop hardwiring things together, hand-formatting learning experiences, we can start offering personalized and even adaptive learning. Yes, there are technical backend issues, and more rigor in development, but this is the direction we can, and should, go. At least, that‘s my proposal, what say you?

Personalized and adaptive learning

6 October 2020 by Clark 1 Comment

For reasons that are unclear even to me, I was thinking about personalized versus adaptive learning. They’re similar in some ways, but also different. And a way to distinguish them occurred to me. It’s kinda simplistic, but I think it may help to differentiate personalized and adaptive learning.

As background, I led a project to build an intelligently adaptive learning platform. We were going to profile learners, but then also track their ongoing behavior. And, on this basis, we’d serve up something appropriate for learner X versus learner Y. (We’d actually recommend something, and they could make other choices.)

It was quite the research endeavor, actually, as the CEO had been inspired by Guilford’s learning model. I dug into that and all the learning styles literature, and cognitive factor analysis, and content models around learning objectives, and revisited my interest in intelligent tutoring, and more. I was able to hire a stellar team, and create an approach that was scientifically scrutable (e.g. no learning ‘styles’ :). We got it up and running before, well, 2001 happened and the Internet bubble burst and…

In some sense, the system was really both, in the way I’m thinking about it. I’ve seen different definitions, and one has adaptive as a subset of personalized, but I’m going a different way. I think of personalized as pre-planned alternatives for different groups, whereas adaptive reacts to the learner’s behavior.

Our use of initial profiling, if we only used that, would be personalized. The ongoing adaptation is what made it adaptive. We had rules that would prioritize preferences, but we'd also use behavior to update the learner model. It's something platforms are doing now, but we had it a couple of decades ago.

So, my simple way of thinking about personalized versus adaptive is that personalized is based upon who you are: your role, largely. We’d swap out examples on marketing for people selling services versus those selling products, for instance. Or if we’re talking negotiation, a vendor might get a different model than a lawyer.

Adaptive, on the other hand, is based upon what you do. So, for instance, if you did poorly on the last problem, we might not give you a more difficult one, but give you another at the same level. Twice in a row doing badly, we might bring you another example, or even revisit the concept. This is what intelligent tutoring systems do; they just tend to require a rigorous model of expertise.
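The distinction can be made concrete in a few lines: personalized branches once on who you are (the profile), while adaptive reacts on the fly to what you do (recent performance). This is a toy sketch; the role names, score threshold, and path labels are all invented for illustration:

```python
def personalized_path(profile: dict) -> str:
    """Pre-planned alternative, chosen once from who the learner is."""
    if profile["role"] == "services":
        return "services_examples"
    return "product_examples"

def adaptive_next(recent_scores: list[float]) -> str:
    """React to what the learner does: struggle twice, revisit the concept."""
    if len(recent_scores) >= 2 and all(s < 0.5 for s in recent_scores[-2:]):
        return "revisit_concept"        # two misses in a row
    if recent_scores and recent_scores[-1] < 0.5:
        return "same_level_problem"     # one miss: stay at this level
    return "harder_problem"             # doing fine: move up
```

Note that `personalized_path` could be computed before the learner ever starts, whereas `adaptive_next` only makes sense mid-experience, which is exactly the difference being drawn.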

Of course, you could get more complicated. Personalization might have a more and less supportive path, depending on your anxiety and confidence. Similarly, adaptive might throw in an encouraging remark while showing some remedial materials.

At any rate, that’s how I differentiate personalized and adaptive learning. Personalized is pre-set based upon some determined differences that suggest different learning paths. Adaptive calculates on the fly and changes what the learner sees next.  How do you see it?
