DevLearn opened with a keynote from Sophia the Robot. With an initially scripted presentation, and some scripted questions from host David Kelly, Sophia addressed the differences between AI and robots, with a bit of wit. The tech used to create the illusion was explored, and then the technology was put to the test with some unscripted questions; the responses were pretty good. An interesting start!
Tools for LXD?
I’ve been thinking about LXD for a while now, not least because I’ve an upcoming workshop at DevLearn in Lost Wages in October. And one of the things I’ve been thinking about is the tools we use for LXD. I’ve created diagrams (such as the Education Engagement Alignment) and quips, but here I’m thinking of something else. We know that job aids are helpful: things like checklists, decision trees, and lookup tables. And I’ve created some aids for the Udemy course on deeper elearning I developed. But here I want to know: what are you using as tools for LXD? How do you use external resources to keep your design on track?
The simple rationale, of course, is that there are things our brains are good at, and things they’re not. We are pattern-matchers and meaning-makers, naturally making up explanations for things that happen. We’re also creative, finding solutions under constraints. Our cognitive architecture is designed to do this, helping us adapt to the first-level world we evolved in.
However, our brains aren’t particularly good at the second-level world we have created. Complex ideas require external representation. We’re bad at remembering rote and arbitrary steps and details. We’re also bad at complex calculations. This makes the case for tools that help scaffold these gaps in our cognition.
And, in particular, for design. Design tends to involve complex responses, in this case in terms of an experience design. That maps out over content, time, and tools. Consequently, there are opportunities to go awry. Therefore, tools are a plausible adjunct.
You might be using templates for good design. Here, you’d have a draft storyboard, for instance, that ensures you’re including a meaningful introduction, causal conceptual model, examples, etc. Or you might have a checklist that details the elements you should be including (something like the sketch below). You could have a model course that you use as a reference.
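To make that concrete, here’s a toy sketch of such a checklist as code. The items are my illustrative picks, not a canonical list, and any tool (or paper) would do just as well:

```python
# A hypothetical LXD design checklist; the items are illustrative, not canonical.
DESIGN_CHECKLIST = [
    "meaningful introduction",
    "causal conceptual model",
    "worked examples",
    "contextualized practice with feedback",
    "closing that connects back to the job",
]

def review(covered: set) -> list:
    """Return the checklist items a storyboard draft hasn't yet addressed."""
    return [item for item in DESIGN_CHECKLIST if item not in covered]

# Example: a draft that so far has only an introduction and examples.
print(review({"meaningful introduction", "worked examples"}))
```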
My question, to you, is what tools are you using to increase the likelihood of a quality design, and how are they working for you? I’d like to know what you’ve found helpful as tools for LXD, as I look to create the best support I can. Please share!
Level of polish?
A debate broke out amongst some colleagues the other day about the desirable level of polish in our elearning. One colleague was adamant that we were undermining our position by using low-quality production. There was a lot of agreement. I had a slightly different view. Even after finding out he was talking more about external-facing content than internal, I still had some differences. After weighing in, I thought it required a longer response, and of course it has to go here.
So, the main complaint was that so much elearning looks dated and incomplete. And I agree! Others chimed in that this doesn’t have to be, while all agreed that it doesn’t need to approach game-quality effects. Then, in my mind, the question switches to “what is good enough?” And I think we do need an answer to that. And, it turns out, to also answer “and what does it take?”
What is good enough?
So, my first concern is the quality of the design. My mantra on design states that it has to be right first. Then you can implement it. If it isn’t right from the get-go, it doesn’t matter how you implement it. And the conversation took some time to sort this out. But let’s assume the design’s right. Then, how much production value do you need?
The original complaint was that we’re looking slack by comparison. When you look at what’s being done in other, related, fields, our production values look last decade, if not last century! And I couldn’t agree more. But does that matter? And that’s where we start getting into nuances. My bottom line question is: “what’s the business case?”
So, I suggest that the investment in production values is based upon how important the ‘experience’ is. If it’s internal, and it’s a critical skill, the production values should be only enough to ensure that learners can identify the situation and perform appropriately (or get feedback). It needs a minimum level of professionalism, and that’s it. If you’re selling it to high-end customers and want to charge a premium price, you’ll need much more, of course.
The issue was that we’re losing credibility if we don’t reach a minimal level of competency. There were many arguments about the locus: fear of going out of bounds, managerial oppression, low-level tools, lack of skills, and more. And these all have validity. We should stipulate a minimal level. Perhaps the Serious eLearning Manifesto? :) We can do better.
What does it take?
This was the other issue. It was pointed out that design teams in other disciplines work in layers, from concept to realization. Jesse James Garrett has a lovely diagram that represents this for information architecture. And others pointed out that there are multiple skills involved, from dialog writing through media production and interface design (they’re conceptually separate) to the quality of the programming, and more. The more you need polish, the more you need to invest in the appropriate skill sets. This again is a matter of marshaling the appropriate resources against the business case.
I think one of the issues is that we overuse courses when other solutions are more effective and efficient. Thus, we don’t have, and don’t properly allocate, the resources to do the job right when it positively, absolutely has to be in the head. Thus, we end up with a lot of boring, information-dump courses. We could be doing more with engaging practice, and less content presentation. That’s a design issue to begin with, and then a presentation one.
Ultimately, I agree that bad elearning undermines our credibility. I do think, however, that we don’t need unnecessary polish. Gilded bad design is still bad design. But then we should align our investment with the professional reception we need. And if we have trouble doing that, we need to rethink our approaches. The right level of investment for the context is the right response; we need the right level of polish. But assessing the context is complex. We shouldn’t treat it simplistically, but instead systemically. If we get that right, we have a chance to impress folks with our astute sense of doing the right thing with the right resources. Less than that is a path to irrelevancy, and doing more is a path to redundancy. Where do you want to go?
Graham Roberts #Realities360 Keynote Mindmap
Graham Roberts kicked off the 2nd day of the Realities 360 conference talking about the Future of Immersive Storytelling. He talked about their experiences and lessons building an ongoing suite of experiences. From the first efforts through to the most recent, it was insightful. The examples were vibrant inspirations.
Stephanie Llamas #Realities360 Keynote Mindmap
Stephanie Llamas kicked off the Realities 360 conference by providing an overview of the VR & AR industry. As a market researcher, she made the case for both VR and AR/MR. With trend data and analysis, she demonstrated growth and real uses. She also suggested that you need to use it correctly. (Hence my talk later this day.)
Working virtually
Of late, I’ve been involved in two separate initiatives that are distributed, one nationally, one internationally. And, as with some other endeavors, I’ve been using some tools to make this work. And, finally, it really, really is working. I’m finding it extraordinarily productive to be working virtually.
In both endeavors, there’s trust. One’s with folks I know, which makes it easy. The other’s with folks who have an international reputation for scholarly work, and that generates an initial acceptance. Working together quickly cements it.
Working
The work itself, as with most things, comes down to communication, collaboration, and cooperation. We’ve got initiatives to plan, draft, review, and execute. And we need to make decisions.
We’re using a social media tool to coordinate: in both cases, Slack is the primary tool for asynchronous communications. We’re setting up meetings (sometimes with the help of Doodle), asking questions, updating on occurrences, and sharing thoughts.
We’re using different tools for synchronous sessions: Zoom in one, Blue Jeans in the other. I like Zoom a bit better because when you open the chat or the list of participants, it expands the window; in Blue Jeans, it covers a bit of the screen. Both, however, handle video streams without a problem.
And, for both, we’re using Google tools to create shared representations. Documents, and occasionally spreadsheets, mostly. I’m experimenting with their draw tools; while they’re not as smooth as OmniGraffle, they’re quite robust. It’s even fun to be working together watching several of us editing a doc at the same time!
There are always hiccups; sometimes one or another of us can’t attend a meeting, or we lose track of files, but nothing that doesn’t plague co-located work. One problem that’s unique is folks who aren’t regular users of one or another of the tools. But we’ve enough peer pressure to remedy that. And, of course, these are folks who are in tech…
Reflecting
One key element, I think, is the ‘working out loud’. It’s pretty easy to share, and people do. Thinking is largely out in the open. There’re subcommittees, for instance, that may work on specific issues, and some executive discussions, but little you can’t see.
And we’re unconsciously working in, and consciously working on, a desirable learning culture. We’re sharing safely, considering ideas fairly, taking time to reflect, and actively seeking diversity. We experiment, and we do serendipitously review our practices (particularly when we onboard new folks).
Most importantly, this is beginning to not only feel natural, but productive. This is the new world of work. Using tools to handle collaboration, coordination, and cooperation (the 3 c’s?). We’re working, and evolving too!
And a key learning for me is that this doesn’t preclude being co-located. Though I wonder if that would actually hurt, since hallway conversations can progress things but leave no trails. Unless, I suppose, you commit to immediately capturing whatever emerges. That’s a cultural thing.
This working virtually is a direction I think will be productive for organizations going forward. It’s social, it’s augmented, and it’s culturally sound. That’s not to say that I won’t welcome the chance to be co-located with these folks at some point. There might even be hugs between folks who’ve never met before (that happens when you interact in a safe space online). But the important thing is that it works, and works well. And what else needs to be said, after all?
Quinnovations
I was talking with my lass, and reminiscing about a few things. And it occurs to me that I may not have mentioned them all. Worse, I confess, I’m still somewhat proud of them. So, at the risk of self-aggrandizement, I thought I’d share a few of my Quinnovations. There’s a bigger list here, but this is the ‘greatest hits’ list, with some annotation. (Note: I’ve already discussed the game Quest for Independence, one of my most rewarding works.)
One project was a game based upon my PhD topic. I proposed a series of steps involved in analogical reasoning, and tested them both alone and then after some training. I found some improvement (arguing for the value of meta-learning instruction). During my post-doc, a side project was developing a game that embedded analogical reasoning in a story setting. I created a (non-existent) island, and set the story in the myths of the voodoo culture on it. The goal was a research environment for analogical reasoning; the puzzles in the game required making inferences from the culture. Interestingly, when tested, most players were random, but a couple were systematic.
With a colleague, Anne Forster, we came up with an idea for an online conference to preface a face-to-face event. This was back circa 1996, so there weren’t platforms for such. I secured the programming assistance of a couple of the techs in the office I was working for (Open Net), and we developed the environment. In it, six folks renowned in their areas conducted overlapping conversations around their topics. This set up the event, and saw vibrant discussions.
A colleague at an organization I was working for, Access Australia CMC, had come up with the idea of a competition for school kids to create websites about a topic. With another colleague, we brainstormed a topic for the first running of the event. In it, we had kids report on innovations in their towns that they could share with other towns (anywhere). I led the design and implementation of the competition: site and announcements, getting it up and running. It ended up generating vibrant participation and winning awards.
Upon my return to the US, I led a team to generate a learning system that developed learners’ understanding of themselves as learners. Ultimately, I conceived of a model whereby we profiled learners as to their learning characteristics (NB: not learning styles) and adapted learning on that basis. There was a lot to it: a content model, rules for adaptation, machine learning for continuing improvement, and more. We got it up and running, and while it evaporated in 2001 (as did the organization we worked for), its legacy served me in several other projects. (And, while they didn’t base it on our system, to my knowledge it’s roughly the same architecture seen in Knewton.)
Using the concept of that adaptive system, with one of my clients we pitched and won the right to develop an electronic performance support system. It ended up being a context-sensitive help system (which is what an EPSS really is ;). I created the initial framework, which the team executed against (replacing a help system created by the system engineers, not the right team to do it). The design had content written into a framework that populated both the manual (as prescribed by law) and the help system. The client ended up getting a patent on it (with my name on it too ;).
The last one I’ll mention for now is a content system for a publisher. They were going to the next generation of their online tool, and were looking for a framework to incorporate their existing texts, guide the next generation of texts, and support multiple business models. Again pulling on that content-structure experience, I gave them a structured content model that met their needs. The model was supposed to be coupled with a tech platform, but that project collapsed, meaning my model didn’t see the light of day. However, I was pleased to find out subsequently that it had a lasting impact on their subsequent works!
The point being that, in conjunction with clients and partners, I have been consistently generating innovations thru the years. I’m not an academic, tho’ I have been and know the research and theories. Instead, I’m a consultant who comes in early, applies the frameworks to come up with ideas that are both good and unique (I capitalize a lot on models I’ve collected over the years), and gets out quickly when I’m no longer adding value. Clients get an outcome that is uniquely appropriate, innovative, and effective. Ideas they likely wouldn’t have come up with on their own! If you’d like to Quinnovate, get in touch!
Chasing Technology Good and Bad
I’ve been complaining, as part of the myths tour, that everyone wants the magic bullet. But, as I was commenting to someone, there are huge tech opportunities we’re missing. How can I have it both ways? Well, I’m talking about two different techs (or, rather, many). The fact is, we’re chasing the wrong technologies.
The problem with the technologies we’re chasing is that we’re chasing them from the wrong beginning. I see people chasing microlearning, adaptive learning, video, sims, and more as the answer. And of course that’s wrong. There can’t be one all-singing, all-dancing solution, because the nature of learning is remarkably diverse. Sometimes we need reminders, sometimes deep practice, sometimes individualization makes sense, and other times it’s not ideal.
The part that’s really wrong here is that they’re doing this on top of bad design! And, as I believe I’ve mentioned, gilded bad design is still bad design. Moreover, if people first spent the time and money on improving their learning design, they’d get a far better return on investment than chasing the latest shiny object. AND, later investments in most anything would be better poised to actually be worthwhile.
That would seem to suggest that there’s no sensible tech to chase, beyond, of course, authoring tools for creating elearning. But that’s not true. Investment in, say, sims makes sense if you’re using it to implement good design (e.g. deep practice), as part of a good learning design strategy. But there’s something deeper I’m talking about. And I’ve talked about it before.
What I’m talking about are content systems. They may seem far down the pike, but let me (again) make the case about why they make sense now, and for the future. The thing is, being systematic about content has both short-term and long-term benefits. And you can use the short-term ones to justify the long-term ones (or vice-versa).
In the short term, thinking about content from a systems perspective offers you rigor. While that may seem off-putting, it’s actually a benefit. If you design your content model around good learning design, you are moving towards the first step, above, about good design. And, if you write good descriptions within those elements, you really provide a foundation that makes it difficult to do bad design.
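To illustrate (a minimal sketch, assuming a simple element breakdown; the names are mine, not a standard schema), a content model built around good learning design might look like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Practice:
    scenario: str   # the situation the learner must respond to
    feedback: str   # consequence-based feedback on the response

@dataclass
class LearningObject:
    objective: str                 # what the learner should be able to do
    concept: str                   # the causal model behind the performance
    examples: List[str] = field(default_factory=list)
    practice: List[Practice] = field(default_factory=list)

    def is_complete(self) -> bool:
        """The rigor: no element of good design may be left empty."""
        return bool(self.objective and self.concept
                    and self.examples and self.practice)
```

The point isn’t these particular fields; it’s that once content lives in a structure like this, an empty concept or missing practice is visible, and bad design gets harder to ship.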
My point is that we’re ignoring meaningful moves to chase chimeras. There are real, valuable steps to take, including formalizing design processes and tools around good design. And there are ways to throw your money away on the latest fad. It’s your choice, but I hope I’ve made a case for one interpretation. So, what’s yours?
PSA: SPF
We interrupt your regularly scheduled blog series for this important public service announcement:
A number of times now, I’ve discovered that there was email being sent to me that I was not getting. Fortunately, my ISP is also a colleague, mentor, and friend and a real expert in cybersecurity, so I asked him. And he explained it to me (and then again when I’d forgotten and it happened again; sorry Sky!). So I’ll document it here so I can point to it in further instances. And it’s about domains and SPF, so it’s a wee bit geeky (and at the edge of my capability). Yet it’s also important for reducing spam, and I’m all for that. So here we go.
This started with an organization where I had been conversing with individuals. And eventually it became clear that they had sent me a form letter, as part of a bigger mailing, and assumed I had received it, while I was still asking about details in said form letter. Debugging this is how I found out what happened.
Now, when an org sends you email directly, your mail system tracks the path it takes to get to you. If it traces back to the server of the org the mail says it’s from, all’s good. For certain types of mail (e.g. event-related or service-related), however, those mails are sent via a service. A good mail server should check whether the service really is authorized to send mail for the org. Otherwise, you could have a lot of people sending things pretending to be from one place but … can you say ‘spam’? Right.
So, what the org needs to do is create a really simple one-line bit of text in something called a Sender Policy Framework (SPF) record that says “they mail on my behalf”. That is, the record lets the org publish a list of servers, IP addresses, or subnets that are authorized to send email on their behalf. And, seriously, this is simple enough that I can do it.
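For the curious, here’s what such a record can look like. The domain and service here are made-up placeholders, but v=spf1, mx, include, ip4, and ~all are real SPF mechanisms:

```
example.com.  IN  TXT  "v=spf1 mx include:bulkmail.example.net ip4:192.0.2.0/24 ~all"
```

Reading it: mx authorizes the org’s own mail servers, include pulls in the sending service’s authorized servers, ip4 adds a subnet, and ~all soft-fails everything else. You can see any domain’s published record with a DNS lookup such as `dig TXT example.com`.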
Yet somehow, some orgs don’t do this. Now, some mailers don’t check, but they should! That check of the org’s DNS entry, to see if there’s an SPF record covering the service, helps reduce spam. So my ISP checks rigorously. And then I miss mail when people haven’t done the right thing in their tech setup. When I have this type of problem, it’s pretty much always this.
Please, please, do check that your orgs get this right if they do use a service. That would be orgs doing mailing lists through external providers (e.g. small firms without the resources to purchase bulk mail systems). And you can ignore this if it doesn’t apply to you, but if you do have the symptoms, feel free to point people here to help them understand what to fix. I certainly will!
We now return you to your regularly scheduled blog, already in progress.
Learning Experience Portals?
What is a learning experience platform? Suddenly the phrase seems ubiquitous, but what does it mean? It’s been on my mental ‘todo’ list for a while, but I finally spent some time investigating the concept. And what I found as the underlying concept mostly makes sense, but I have some challenges with the label. So what am I talking about?
It’s ImPortal!
Some background: when I talk about the performance ecosystem, it’s not only about performance support and resources, but finding them. I.e., it includes the need for a portal. When I ask audiences “how many of you have portals in your org”, everyone raises their hands. What also emerges is that they have bunches of them. Of course, they’re organized by the business unit offering them. HR, product, sales: they all have their own portals. Which doesn’t make sense. What does make sense is to have one place to go for things, organized by people’s roles and membership in different groups.
A user-centered way of organizing portals makes sense then. People need to be able to see relevant resources in a good default organization, have the ability to reorganize to a different default, and search. Federate the portal and search over all the sources of resources, not some subset. I’ve suggested that it might make sense to have a system on top of the portals that pulls them together in a user-centric way.
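To sketch what I mean (purely illustrative; the interface and names here are hypothetical), such a layer might simply fan a query out over the existing portals and merge the results:

```python
from typing import List, Protocol

class PortalSource(Protocol):
    """Whatever each business unit's portal exposes for search."""
    name: str
    def search(self, query: str) -> List[dict]: ...

def federated_search(query: str, portals: List[PortalSource]) -> List[dict]:
    """Query every portal, keep provenance, and merge into one user-facing list."""
    results = []
    for portal in portals:
        for hit in portal.search(query):
            hit["source"] = portal.name  # so users know where it came from
            results.append(hit)
    # Order by whatever relevance score each portal reports (0 if none).
    return sorted(results, key=lambda h: h.get("score", 0), reverse=True)
```

The real work, of course, is in the role-based default organization and in search quality, but the architectural point is that the portals stay where they are; the user-centric layer sits on top.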
An additional issue is that the contents of said portal should be open, in the sense that all users should be able to contribute their curated or created resources, and the resources can be in any format: video, audio, document, even interactive. In today’s era of increasing speed of change and decreasing resources for meeting learning needs, L&D can no longer try to own everything. If you create a good culture, the system will be self-policing.
And, of course, the resources aren’t all about learning. Performance support is perfectly acceptable. The in-the-moment video is as needed as the course on a new skill. Anything people want, from library learning resources to that quick checklist, should be supported.
The Learning Experience Platform(?)
As I looked into Learning Experience Platforms (LXP), (underneath all the hype) I found that they’re really portals; ways for content to be aggregated and made available. There are other possible features – libraries, AI-assistance, paths, assessments, spaced delivery – but at core they’re portals. The general claim is that they augment an LMS, not replace it. And I buy that.
The hype is a concern: microlearning, for instance (one article referred to the afore-mentioned in-the-moment video, glossing over that you may learn nothing from it and have to access it again). And of course there are exaggerated claims about who does what. It appears several LMS companies are now calling themselves LXPs. I’ll suggest that you want a tool designed to be a portal, not one grafted onto another fundamental raison-d’être. Similarly, many also claim to be social. Ratings would be a good thing, but trying to be a full social media platform would not.
Ultimately, such a capability is good. However, if I’m right, I think Learning Experience Platform isn’t the right term; really, they’re portals. Both ‘learning’ and ‘experience’ are wrong: they can support performing in the moment, and generally they’re about access, not generating experiences. And I could be wrong.
Take-home?
Ecosystems should be integrated from best-of-breed capabilities. One all-singing, all-dancing platform is likely to be wrong in at least one if not more of the subsidiary areas, and then you’re locked in. I think a portal is a necessary component, and the LXPs have many performance & development advantages over generic portal tools.
So I laud their existence, but I question their branding. My recommendation is always to dig beneath the label, and find the underlying concept. For instance, each of the concepts underpinning the term microlearning is valuable, but the aggregation is problematic. Confusion is an opening for error. So too with LXP: don’t get it confused with learning or creating experiences. But do look to the genre for advanced portals. At least, that’s my take: what’s yours?