I just had an article published on Rethinking elearning that’s another take on the point of my last post on designing for an uncertain world. It’s in the well-edited Learning Solutions ezine published by the eLearning Guild. Check it out!
Reflections on ISPI 2010
Early in the year, I gave a presentation online to the Massachusetts chapter of ISPI (the International Society for Performance Improvement), and they rewarded me with a membership. A nice gesture, I figured, but little more (being a continent away). To my benefit, I was very wrong. The ISPI organization gave each chapter a free registration to their international conference, which happens to be in San Francisco this year (just a BART trip away), and I won! (While my proximity may have been a factor, I’m not going to do aught but be very grateful and feel that the Mass chapter can call on me anytime.) Given that I just won a copy of GPS software for my iPhone (after seemingly never winning anything), I reckon I should buy a lottery ticket!
Now, it probably helps to explain that I’ve been eager to attend an ISPI conference for quite a while. I’m quite attracted to the HPT (Human Performance Technology) framework, and I’m ever curious. I even considered submitting to the conference to get a chance to attend, but their submission processes seemed so onerous that I gave up. So, I was thrilled to get a chance to finally visit.
Having completed the experience, I have a few reflections. I think there’s a lot to like about what they do, I have some very serious concerns, and I wish we could somehow reconcile the too-many organizations covering the same spaces.
I mentioned I’m a fan of the HPT approach. There are a couple of things to like, including that they start by analyzing the performance gaps and causes, and are willing to consider approaches other than courses. They also emphasize a systems approach, which I can really get behind. There were some worrying signs, however.
For instance, I attended a talk on Communities of Practice, but was dismayed to hear discussion of monitoring, managing, and controlling instead of nurturing and facilitation. While there may need to be management buy-in, it comes from emergent value, not exec-dictated outcomes the group should achieve!
Another presentation talked about the Control System Model of Management. Maybe it was my mistake to come to OD presentations at ISPI, but it’s this area I’m interested in via my involvement in the Internet Time Alliance. Transparency and contribution did end up appearing, but almost by stealth, as opposed to being explicit declarations of the culture.
On the other hand, there were some positive signs. They had enlightened keynotes, e.g. one talking about Appreciative Inquiry and positive psychology that I found inspiring, and I attended another on improv focusing on accepting the ‘offer’ in a conversation. And, of course, Thiagi and others talked about story and games.
One surprise was that the technology awareness seems low for a group with technology in their prized approach. Some noticed the lack of tweets from the conference, and there wasn’t much of an overall technology presence (I saw no other iPads, for instance). I challenged one of the editors of their handbook, Volume 1 (which I previously complained didn’t have enough on informal learning and engagement) about the lack of coverage of mobile learning, and he opined that mobile was just a “delivery channel”. To be fair, he’s a very smart and engaging character, and when I mentioned context-sensitivity, he was quite open to the idea.
I attended Guy Wallace‘s presentation on Enterprise Process Performance Improvement, and liked the structure, but reckon that it might be harder to follow in more knowledge-oriented industries. It was a pleasure to finally meet Guy, and we had a delightful conversation on these issues and more, with some concurrence on the thoughts above. Given that he was a multiple honoree at the conference, there is clearly hope for the organization to broaden its focus.
Overall, I had mixed feelings. While I like their rigor and research base, and they are incorporating some of the newer positive approaches, it appears to me that they’re still very much mired in the old hierarchical style of management. Given the small sample, I reckon you should determine for yourself. I can clearly say I was grateful for the experience, and had some great conversations, heard some good presentations, and learned. What more can you ask for?
Designing for an uncertain world
My problem with the formal models of instructional design (e.g. ADDIE for process) is that most are based upon a flawed premise: that the world is predictable and understandable, so that we can capture the ‘right’ behavior and train it. Which, I think, is a naive assumption, at least in this day and age. So why do I think so, and what do I think we can (and should) do about it? (Note: I let my argument lead where it must, and find I go quite beyond my intended suggestion of a broader learning design. Fair warning!)
The world is inherently chaotic. At a finite granularity, it is reasonably predictable, but overall it’s chaotic. Dave Snowden’s Cynefin model, recommending various approaches depending on the relative complexity of the situation, provides a top-level strategy for action, but doesn’t provide predictions about how to support learning, and I think we need more. However, most of our design models are predicated on knowing what we need people to do, and developing learning to deliver that capability. Which is wrong; if we can define it at that fine a granularity, we bloody well ought to automate it. Why have people do rote things?
It’s a bad idea to have people do rote things, because they don’t, and can’t, do them well. It’s in the nature of our cognitive architecture to have some randomness. And it’s beneath us to be trained to do something repetitive, to do something that doesn’t respect and take advantage of the great capacity of our brains. Instead, we should be doing pattern-matching and decision-making. Now, there are levels of this, and we should match the performer to the task, but as I heard Barry Schwartz eloquently say recently, even the most mundane-seeming jobs require some real decision making, and in many cases that’s not within the purview of training.
And top-down rigid structures, with one person doing the thinking for many, will no longer work. Businesses increasingly respond by complexifying, but that eventually fails, as Clay Shirky has noted, and adaptive approaches are likely to be more fruitful, as Harold Jarche has pointed out. People are going to be far better equipped to deal with unpredictable change if they have internalized a set of organizational values and a powerful set of models to apply than by any possible amount of rote training.
Now think about learning design. Starting with the objectives, the approach of Mager, where you define the context and performance, is getting more difficult. Increasingly there are complicated nuances you can’t anticipate. Our products and services are more complex, and yet we need more seamless execution. Consider, for example, trying to debug problems between a hardware device and a network service provider; if you’re trying to provide a total customer experience, the old “it’s the other guy’s fault” just isn’t going to cut it. Yes, we could make our objectives higher and higher, e.g. “recognize and solve the customer’s problem in a contextually appropriate way”, but I think we’re getting out of the realms of training.
We are seeing richer design models. Van Merrienboer’s 4 Component ID, for instance, breaks learning up into the knowledge we need, and the complex problems we need to apply that knowledge to. David Metcalf talks about learning theory mashups as ways to incorporate new technologies, which is, at least, a good interim step and possibly the necessary approach. Still, I’m looking for something deeper. I want to find a curriculum that focuses on dealing with ambiguity, helping us bring models and an iterative and collaborative approach. A pedagogy that looks at slow development over time and rich and engaging experience. And a design process that recognizes how we use tools and work with others in the world as a part of a larger vision of cognition, problem-solving, and design.
We have to look at the entire performance ecosystem as the context, including the technology affordances, learning culture, organizational goals, and the immediate context. We have to look at the learner, not stopping at their knowledge and experience, but also including their passions, who they can connect to, their current context (including technology, location, current activity), and goals. And then we need to find a way to suggest, as Wayne Hodgins would have it, the right stuff, e.g. the right content or capability, at the right time, in the right way, …
An appropriate approach has to integrate theories as disparate as distributed cognition, the appropriateness of spaced practice, minimalism, and more. We probably need to start iteratively, with the long-term development of learning and similarly opportunistic performance support, and then see how we intermingle the two.
Overall, however, this is how we go beyond intervention to augmentation. Clive Thompson, in a recent Wired column, draws from a recent “man+computer” chess competition to conclude “serious cognitive advantages accrue to those who are best at thinking alongside machines”. We can accessorize our brains, but I want to look at the other side: how can we systematically develop people to be effectively supported by machines? That’s a different twist on technology support for performance, one that requires thinking about what the technology can do, but also about how we develop people to be able to take advantage of it. A mutual accommodation will happen, but just as with learning to learn, we shouldn’t assume ‘ability to perform with technology augmentation’. We need to design the technology/human system to work together, and develop both so that the overall system is equipped to work in an uncertain world.
I realize I’ve gone quite beyond just instructional design. At this point, I don’t even have a label for what I’m talking about, but I do think that the argument that has emerged (admittedly, flowing out from somewhere that wasn’t consciously accessible until it appeared on the page!) is food for thought. I welcome your reactions, as I contemplate mine.
The (Initial) iPad Experience
I’m usually a late adopter of new technology, largely because I’m frugal. I don’t like to spend money until I know just what the value is that I will be getting. So, when I heard about the iPad, I wasn’t one of those who signed up in advance. Which isn’t to say that I didn’t have a case of techno-lust; I am a geek, a boy who loves his toys. And, after all, I am on the stump about mobile learning.
So, I followed the developments closely. I looked at the specs, and I tracked the software app announcements. And I reflected a lot about the potential learning applications of this new platform.
The decision
What I didn’t expect was to get transfixed by a new possibility: that this device could provide a new capability to me, that of a laptop replacement. When I travel, I use my laptop to work; I write, I diagram, I create presentations, and catch up on email. The iPad, however, was announced as coming with (or having available) software for word processing (Pages), diagramming (OmniGraffle), presentations (Keynote), and email (Mail). It would also, when in range of WiFi, do standard web stuff like browse the web and use Twitter.
It began to look like maybe this device did have a justifiable case, such that m’lady was agreeable. There were some considerations: did I need the 3G version, which would come later, and how much memory (16, 32, or 64 GB)? Given that I already have an iPhone, which would meet immediate email, Twitter, and/or web needs when not in WiFi range, I figured I could go with the first one coming out. However, my iPhone at 16 GB is already half full, and I’d likely be adding more apps and documents, so I thought I’d better go for 32 GB (I also figured that with aggressive memory management, I could skip the 64 GB version). So my decision was made, with one problem.
The purchase
I hadn’t signed up for delivery, and now that deadline was being pushed out. And, I had a trip planned before the next shipping date. Now that I’d decided I could use it as a laptop substitute, I already wanted it. I wasn’t frantic, and I hate to wait in lines, so I wasn’t going to queue up at the Apple store. However, I did discover that other Apple retailers would have them, particularly BestBuy, which has a nearby store. So my plan was made: I would swing by there just around opening time, and if there wasn’t a huge queue, I’d see if they had any left. I wasn’t particularly optimistic.
So, after breakfast on the 3rd, I headed out in time to get there 5 minutes before they opened, and while there was a small queue, it wasn’t too bad. I checked it out, and a guy told me that they’d been handing out tickets for the iPad, and they seemed to have plenty. They didn’t come out again before the doors opened, but I knew I’d have my answer, one way or another, in a few minutes. And lo and behold, they had stacks of iPads. My transaction was complete within 7 minutes of the doors opening, and I had my new device! And once I tweeted this outcome, I very shortly thereafter had several requests for this blog post! (The Apple lady in the BestBuy said the same thing happened with the iPhone releases: queues at the Apple store, and walk-in service at the BestBuy; now you, good reader, are in on the secret.)
I also had to accessorize. BestBuy didn’t have the case, but I got a neoprene one with a pocket. Then Apple did have the case when I called, so I swung by late in the day. The place was packed but they also had iPads left! I also got the display adapter so I can present from it, and AppleCare.
The experience
Now it was time to play. I got it home, connected it to my Mac, and started setting it up. One almost immediate surprise was that it wasn’t charging. Turns out that’s not uncommon: you need a relatively powerful USB port to both power and synch, and I guess my old laptop isn’t up to the job. However, it came fully charged, and I got 2 days of intermittent use before it got close to needing a charge. Still, it’s a bit of a pain to swap the cable between synching and charging… However, it was recognized right away and synched just fine. I made a mistake and synched everything (all photos, music, etc.) when I really just wanted the limited set I had on the iPhone, but I was able to rectify that.
And then it was time for software. I’m as frugal with software as with hardware, so while I took some interesting free new iPad apps (solitaire, I confess, as well as weather and news, a calculator, etc.), I was more picky with paid software. I did get Pages and Keynote (not Numbers, as I’m not a spreadsheet jockey, tho’ I may well get it if I start getting a lot of Excel sheets). I also got a PDF reader, GoodReader, as I had an immediate need. So far I’ve held off on OmniGraffle (which I *love* on the Mac), as it’s surprisingly expensive and the first reviews suggested it may have speed and interface problems. I’ll keep tracking the reviews, as I have faith in the company. I’m also looking for a good note-taking option that will handle both text and sketches and/or quick diagrams, but the one or two examples I’ve found have had mixed reviews.
Finally, the ultimate question: how does it work? And the answer is, very, very well. The battery life is almost phenomenal. The display is superb. The overall user experience is compelling. Tweetdeck (Twitter), Safari (browser), and Mail look great and work effectively. And it’s pretty functional; I’m touch-typing this with the onscreen keyboard in Pages on the plane, with the case folded to hold the iPad in landscape, tilted so I can use the onscreen keyboard and still see the screen.
There’s still some software to come (diagramming, note-taking with sketches), and accessories (maybe bluetooth keyboard), but it’s working well for me already. (Update: it was a battle to get this posted!) I’m on a short 3 day trip with just the iPad and iPhone, laptop-less, and we’ll see how it goes. (I do have a pad of paper and a pen. :) Stay tuned!
Mea Culpa and Rethink on Pre-tests
Well, it turns out I was wrong. I like to believe it doesn’t happen very often, but I do have to acknowledge it when I am. Let me start from the worst, and then qualify it all over the place ;).
In the latest Scientific American Mind, there is an article on The Pluses of Getting It Wrong (first couple paragraphs available here). In short, people remember better if they first try to access knowledge that they don’t have, before they are presented with the to-be-learned knowledge. That argues that pre-tests, which I previously claimed are learner-abusive, may have real learning benefits. This result is new, but apparently real. You empirically have better recall for knowledge if you tried to access it, even though you know you don’t have it. My cognitive science-based explanation is that the search in some ways exercises appropriate associations that make the subsequent knowledge stick better.
Now, I could try to argue against the relevance of the phenomenon, as it’s focused on knowledge retrieval rather than application, and may still lead to ‘inert knowledge’ (where you may ‘know it’, but you don’t activate it in relevant situations). However, it is plausible that this holds for application as well. Roger Schank has argued that you have to fail before you can learn. (Certainly I reckon that’s true with overconfident learners ;). That is, if you try to solve a problem that you aren’t prepared for, the learning outcome may be better than if you don’t. So I don’t think it’s useful to deny this result; instead, I want to think about what it might mean for still creating a non-aversive learner experience.
I still believe that giving learners a test they know they can’t pass at best seems to waste their time, and at worst may actually cause some negative affect, like a hit to their self-esteem. Obviously, we could and should let them know that we are doing this for the larger-picture learning outcome. But can we make the experience more ‘positive’ and engaging?
I think we can do more. I think we can put the mental ‘reach’ in the form of problem-based learning (this may explain the effectiveness of PBL), and ask learners to solve the problem. That is, put the ‘task’ in a context where the learner can both recognize the relevance of the problem and is interested in it. Once learners recognize they can’t solve the problem, they’re motivated to learn the material. And they should be better prepared mentally for the learning, according to this result. While it *is*, in a sense, a pre-test, it’s one that is connected to the world, is applied, and consequently is less aversive. And, yes, you should still ensure that it is known that this is done to achieve a better outcome.
Now, I can’t guarantee that the results found for knowledge generalize to application, but I do know that, by and large, rote knowledge is not going to be the competitive edge for organizations. So I’d rather err on the side of caution and have the learners do the mental ‘reach’ for the answer, but I want it to be as close as possible to the reach they’ll do when they really are facing a problem. If there is knowledge that genuinely must be memorized (and please, do ensure there really is, don’t just take the client’s or SME’s word for it), then you may want to take this approach for that knowledge too, but I’m (still) pushing for knowledge application, even in our pre-tests.
So, I think there’s a revision to the type of introduction you use for the content: present the problem, or type of problem, learners will be asked to solve later, and encourage them to have an initial go at it before the concepts, examples, etc. are presented. It’s a pre-test, but of a more meaningful and engaging kind. I’d love to see any experimental investigation of this, by the way.
The Great ADDIE Debate
At the eLearning Guild’s Learning Solutions conference this week, Jean Marripodi convinced Steve Acheson and myself to host a debate on the viability of ADDIE in her ID Zone. While both of us can see both sides of ADDIE, Steve uses it, so I was left to take the contrary (aligning well to my ‘genial malcontent’ nature).
This was not a serious debate, in the model of the Oxford Debating Society or anything; instead, we’d agreed that we were going to go for controversy and fun in equal measure. This was about making an entertaining and informative event, not a scientific exploration. And in that, I think we succeeded (you can review the tweet stream from attendees and some subsequent conversation). Rather than recap the debate (Gina Minks has a short piece in her overall summary of the day), I’ll recap the points:
The pros:
- ADDIE provides structured guidance for design
- ADDIE includes a focus on implementation and evaluation
- ADDIE serves as a valuable checklist to complement our idiosyncratic design habits
The cons:
- ADDIE is inherently a waterfall model, and needs patching to accommodate iterative development and rapid prototyping
- People use ADDIE too much as a crutch for design without taking responsibility for using it appropriately
- It assumes courses
The pragmatics:
Steve showed how he does take responsibility, putting evaluation in the middle and using the model more flexibly. He uses Dick & Carey’s model to start with, ensuring that a course is the right solution. The fact that the initial ‘course, job aid, or other’ analysis is not included, however, is a concern.
It also came out that having a process is a powerful argument against those who might try to press unreasonable production constraints on you. If a VP wants it done in an unreasonable time frame, or doesn’t want to allow you to question the analysis that a course is needed, you have a push-back (“it’s in our process”), particularly in a process-driven organization. You do want a process.
The Alternatives:
The obvious question came up about what would be used in place of ADDIE. I believe that ADDIE as a checklist would be a nice accompaniment to both a more encompassing and a more learning-centric approach. For the former, I showed the HPT model as a representation of a design approach considering courses as part of a larger picture. For the latter, I suggested that a focus on learning experience design would be appropriate.
Using an HPT-like approach first, to ensure that a course is the right solution, is necessary. Then, I’d focus on working backwards from the needed change (Michael Allen talked about using sketches as lightweight prototypes at the conference, and first drawing the last activity the user engaged in) thinking about creating a learning experience that develops the learner’s capability. Finally, I’d be inclined to use ADDIE as a checklist to ensure all the important components are considered, once I’d drafted an initial design (or several). ADDIE certainly may be useful in taking that design forward, through development, implementation and evaluation.
Summary
I think ADDIE falls apart most in the initial analysis, not being broad enough, and in the design process: e.g. most ID processes neglect the emotional side of the equation, despite the availability of Keller’s ARCS model (which wasn’t even in the TIP database!). Good users, like Steve, take responsibility for reframing it practically, but I’m not confident that even a majority of ADDIE use is so enabled. Consequently, I worry that ADDIE is more detrimental than good. It ensures the minimum, but it essentially prevents inspiration.
I’m willing to be wrong, but I’ve been looking at the debate on both sides for a long time. While I know that PowerPoint doesn’t kill people, people kill people, and the same is true of ADDIE, the continued reliance on it is problematic. We probably need a replacement, one that starts with a broader analysis, and then provides guidance across job aid development, course development and more, that has at core iterative and situated design, informed by the recognition of the emotional nature of human use. Anyone have one to hand? Thoughts on the above?
Learning Tools
Owing to sins in my past, I not only am speaking on mobile learning at the eLearning Guild’s Learning Solutions conference, but also introduced the tools section of the e-Learning Foundations Intensive session. The tools themselves were covered by smart folks like Patti Shank, Harry Mellon, Steve Foreman, and Karen Hyder; I was supposed to set the context.
Now, I talked about a number of things, including vendors, total cost of ownership, tradeoffs, and the development process, but I also included the following diagram, attempting to capture the layers of systems and tools that support both formal and informal learning. In some ways the distinctions I make are arbitrary (not to say abstract :), but still, I intended this to be a useful characterization of the space:
The point here is that applications sit on top of the hardware and systems. There are assets (created with media tools) that can (and should) be managed; these are aggregated into content, whether courses or resources; the content is accessible through synchronous or asynchronous courses or games, portals or feeds; and it’s all managed through an LMS or a social networking system.
The graphic was hard to see on the screen (mea culpa), so I’ve reproduced it here. Does this make sense?
The GPS and EPSS
It’s not unknown for me to enter my name into a drawing for something, if I don’t mind what they’re doing with it. It’s almost unknown, however, for me to actually win, but that’s exactly what happened a month or so ago when I put a comment on a blog prior to the MacWorld show, and won a copy of Navigon turn-by-turn navigation software for my iPhone. I’d thought a dedicated unit might be better, though I’d have to carry two devices, and if I moved from an iPhone to a Droid or Pre I’d suffer. But for free…
When I used to travel more (and that’s starting again), I usually managed to get by with Google Maps: put in my desired location (so glad they finally added copy/paste, such a no-brainer, rather than having to write it down elsewhere and type it in, or remember it, usually imperfectly). In general, maps are a great cognitive augment, a tool we’ve developed to be very useful. And I’m pretty good with directions (thankfully), so when a trip went awry it wasn’t too bad. (Though upper New Jersey… well, it can get scary.) Still, I’d been thinking seriously about getting a GPS, and then I won one!
And I’m happy to report that Navigon is pretty darn cool. At first the audio was too faint, but then I found out that upping the iPod volume (?) worked. (And then it didn’t the last time, at all, with no explanation I can find. Wish it used the darn volume buttons. We’ll see next time. ) However, it does a fabulous job of displaying where you are, what’s coming up, and recalculating if you’ve made a mistake. It’s a battery hog, keeping the device on all the time, but that’s why we have charging holders (which I’d already acquired for long trips and music). It also takes up memory, keeping the maps onboard the device (handy if you’re in an area with bad network coverage), but that’s not a problem for me.
However, my point here is not to extol the virtues of a GPS, but instead to use it as a model for some optimum performance support, as an EPSS (Electronic Performance Support System). There’s a problem with maps in a real-time performance situation. This goes back to my contention that the major role of mlearning is accessorizing our brain. Memorizing a map of a strange place is not something our brains do well. We can point to the right address, and in familiar places choose between good roads, but the cognitive overhead is too high for a path of many turns in unfamiliar territories. To compound the challenge, the task is ‘real time’, in that you’re driving and have to make decisions within a limited window of recognition. Also, your attention has to be largely outside the vehicle, directed towards the environment. And to cap it all off, the conditions can be dark, and visibility obscured by inclement weather. All told, navigation can be challenging.
While the optimal solution is a map-equipped partner sitting ‘shot-gun’, a GPS has been designed to be the next best thing (and in some ways superior). It has the maps, knows the goal, and often knows more about certain peculiarities of the environment than a map-equipped but similarly novice partner would. A GPS also typically does not get its attention distracted when it should be navigating. It can provide voice assistance while you’re driving, so you don’t need to look at the device when your attention needs to be on the road, but at safe moments it can visually display useful guidance about lanes to be in (and avoid), without requiring much screen real estate.
And that’s a powerful model to generalize from: what is the task, what are our strengths and limitations, and what is the right distribution of task between device and individual? What information can a device glean from the immediate and networked environment, from the user, and then provide the user, either onboard or networked? How can it adapt to a changing state, and continue to guide performance?
Many years ago, Don Norman talked about how you could sit in pretty much any car and know how to drive it, since the interface had had time to evolve to a standard. The GPS has similarly evolved in capabilities to a useful standard. However, the more we know about how our brains work, the more we can predetermine what sort of support is likely to be useful. Which isn’t to say that we won’t still need to trial and refine, and use good principles of design across the board: interface, information architecture, minimalism, and more. We can, and should, be thinking about meeting organizational performance needs, not just learning needs. Memorizing maps isn’t necessarily going to be as useful as having a map, and knowing how to read it. What is the right breakdown between human and tool in your world, for the individuals you want to perform at their best? What’s their EPSS?
And on a personal note, it’s nice to have the mobile learning manuscript draft put to bed, and be able to get back into blogging and more. A touch of the flu has delayed my ability to think again, but now I’m ready to go. And off I go to the Learning Solutions conference in Orlando, to talk mobile, deeper learning, and more. The conference will both interfere with blogging and provide fodder as well. If you’re there, please do say hello.
Some accumulated thoughts…
I have had my head down cranking out the manuscript for my mobile learning book. The deadline for the first draft is breathing down my neck, and I’ve been quite busy with some client work as well. The proverbial one-armed paper hanger comes to mind.
However, that does not mean my mind has been idle. Far from it, actually. It’s just not been possible to find the time to do the thoughts justice. I’m not really going to here, either, but I do want to toss out some recent thoughts and see what resonates with you, so these are mini-blogs (not microblogging):
A level above
I have long argued that we don’t use mental models enough in our learning, and also that we focus too much on knowledge and not enough on skills. As I think about developing learning, I want to equip learners to be able to regenerate the approach they should be using if they forget some part of it, which they can do if they have been given a conceptual model: the relationships that guide its application to a problem.
I realize I want to go further, however. Given the rate of change of things these days, and the need to empower learners to go beyond just what is presented (moving from training to education, in a sense), I think we need to go further to facilitate the transition from ‘dependent’ learning to independent and interdependent learning, as my colleague Harold Jarche so nicely puts it.
To do that, I think we need to take our presentation of the model a little bit further. I think we need to look at, as a goal, having presented the learning in such a way that our learners understand the concept well enough not only to regenerate it, but to maintain, extend, and self-improve it. Yes, it is some extra work, but I think that is going to be critical. It will not only be the role of the university (despite Father Guido), but also the workplace. It’s not quite clear what that means practically, but I definitely want to put this stake in the ground to start thinking about it. What are your thoughts?
More on the iPad and the Publishing marketplace
I’ve already posted on the iPad, but I want to go on a little longer. First, the good news: OmniGroup has announced that they’ll be porting OmniGraffle (and their other apps) to the iPad. Yay! I *really* like their diagramming tool (where do you think I come up with all those graphics?).
On the other hand, I had lunch the other day with Joe Miller, who is the VP of Technology for Linden Labs. He sees the iPad as a game changer in ways that are subtle and insightful. As we talked, it was clear he feels that the whole Flash thing is a big mistake: one of the things you would use the iPad for is surfing the web, and more than 75% of the web runs Flash. It does seem like a relatively small thing to let hang up a major play.
Further, as I said earlier, I think interactivity is the major opportunity for publishers to go beyond the textbook on eReaders, and the iPad could lead the way. But right now, Flash is the lingua franca of interactivity on the web, and without it, there’s not an obvious fallback that won’t require rewriting across platforms instead of write-once, run anywhere.
Joe did point me to an interesting new eReader proposal, by Ray Kurzweil of all people. Oddly, it’s Windows-only, so I’m not quite sure of the relevance to the Mac (tho’ you’d think they’d port it over with alacrity), but a free, more powerful eReader platform could have a big impact.
Lots of more interesting things on the way, after I get this draft off to the publisher and get back into the regular blogging swing. ‘Til then, take care, and keep up the dialog!
eLearning Learning
Just to note that Learnlets is now part of the blogs tracked in Tony Karrer’s eLearning Learning. Tony’s built an architecture that allows blogs and articles on a particular topic to be aggregated and searched.
As part of a Personal Learning Network for those in elearning, such a searchable repository is quite useful. I used it as a recommended resource for the upcoming Foundations Intensive elearning introduction event as part of the Learning Solutions conference.
You can search on a topic, or just use the keywords on the left. You can see what’s new in the center, and the blogs tracked on the right.
Tony’s been quite active in looking out for new ways technology can serve the elearning community, and it’s nice to be a part of one of his solutions.
