Learnlets

Clark Quinn’s Learnings about Learning

Virtual Worlds #lrnchat

31 July 2009 by Clark 4 Comments

In last night’s #lrnchat, the topic was virtual worlds (VWs).   This was largely because several of the organizers had recently attended one or another of the SRI/ADL meetings on the topic, but also because one of the organizers (@KoreenOlbrish) is majorly active in the business of virtual worlds for learning through her company Tandem Learning.   It was a lively session, as always.

The first question to be addressed was whether virtual worlds had been over- or underhyped. The answer isn’t one or the other, of course. Some felt underhyped, as there’s great potential. Others thought they’d been overhyped, as there’s lots of noise but few real examples. Both are true, of course. Everyone pretty much derided the presentation of PowerPoint slides in Second Life, however (and rightly so!).

The second question explored when and where virtual worlds make sense. Others echoed my view that VWs are best for inherently 3D and social environments. Some interesting nuances came in exploring the thought that 3D doesn’t have to be at our scale: we can do micro or macro 3D explorations as well, and not just across distance, but also time. Imagine exploring a slowed-down, expanded version of a chemical reaction with an expert chemist! Another good idea was contextualized role plays. Have to agree with that one.

Barriers were explored, and of course value propositions and technical issues ruled the day. Making the case is one problem (a Forrester report was cited that says enterprises do not yet get VWs), and the technical (and cognitive) overhead is another.   I wasn’t the only one who mentioned standards.

Another interesting challenge was the lack of experience in designing learning in such environments. These are still early days, I’ll suggest, and a lot of what’s being done is reproductions of other activities in the new environment (the classic problem: initial uses of a new technology mirror the old technology). I suggested that we have principles (what good learning is, and what VW affordances are) that should guide us to new applications without having to go through that ‘reproduction’ stage.

I should note that having principles does not preclude new opportunities coming from experimentation, and I laud such initiatives. I’ve opined before that it’s an extension of the principles from Engaging Learning combined with social learning, both areas I have experience in, so I’m hoping to find a chance to really get into it, too.

The third question explored what lessons can be learned from social media to enhance appropriate adoption of VWs.   Comments included that they needed to be more accessible and reliable, that they’ll take nurturing, and that they’ll have to be affordable.

As always, the lrnchat was lively, fun, and informative. If you haven’t tried one, I encourage you to at least take it for a trial run. It’s not for everyone, but some admitted to it being an addiction! ;) You can find out more at the #lrnchat site.

For those who are interested in more about VWs, I want to mention that there will be a virtual world event here in Northern California September 23-24, the 3D Training, Learning, & Collaboration conference. In addition to Koreen, people like Eilif Trondsen and Tony O’Driscoll (who has a forthcoming book with Karl Kapp on VW learning) will be speaking, and companies like IBM and ThinkBalm are represented, so it should be a good thing. I hope to go (and pointing to it may make that happen, full disclosure :). If you go, let me know!

Standards and success

20 July 2009 by Clark Leave a Comment

Apparently, Google has recently opined that the future of mobile is web standards.   While this is wonderfully vindicating, I think there’s something more important going on here, as it plays out for a broader spectrum than just mobile.

I’ve been reflecting on the benefits that standards have provided. What worked for networks was standardization on TCP/IP as a protocol for packet transmission. What worked for email was standardization on the SMTP protocol. HTTP standardization has been good for the web, where it’s been implemented properly! What’s been a barrier is inconsistent implementation of web standards, like Microsoft’s non-standard versions of HTML for browsers and of Java.

The source of the standard may be by committee, or by the originator.   Microsoft’s done well for itself with the Office suite of applications, and by opening up the XML version, they’re benefiting while not doing harm.   They own the space, and everyone has to at least read and write their format to have any credibility. While IMS & IEEE held meetings to get learning content standards nailed down, ADL just put their foot down with SCORM (and US Defense is a big foot), and it pretty much got everyone’s attention.   But it’s having standards that matters.   The fact that Blu-ray finally won the battle has really opened up the market for high definition video!

On the other hand, keeping standards proprietary has hindered development. At the recent VW talks hosted by SRI, one of the topics was the inability to transfer a character between platforms. That’s good for the providers, but bad for the development of the field. Eventually, one format will emerge, but it may take committees, or it may be that someone like Linden Lab will own the space sufficiently that everyone will lock into a format they provide. Until then, any investment has trouble being leveraged in a longer-term picture, as the companies you go with may not survive! There’s an old saying about how wonderful standards are because there are so many of them. The problem is when they’re all around the same thing! I was regaling a colleague with the time I smoked (er, caused to burn up, not lighting up!) an interface card by trying to connect two computers to exchange data. One manufacturer had, contrary to the standard, decided to put 12 volts on a particular pin!

And, unfortunately, in the mobile space, the major providers here in the US want to lock you into their walled gardens, as opposed to, say, Europe, where all the phones have pretty much the same abilities to access data. This has been a barrier to the development of services. The web is increasingly powerful, with HTML5, and so while some things won’t work, web-based applications are becoming the lingua franca for not just content exchange but interactive activities. The US is embarrassingly behind, despite having the leading platforms (iPhone, Pre, etc.).

In one sense it’s sad that we can’t do better, but at least it’s good to have the web as a fallback now. We can make progress when it doesn’t matter what device, or OS, you’re using, as long as you can connect. The real news is that there is a lingua franca for mobile that you can use, so really there aren’t any reasons to hold off any longer. Ellen Wagner sees a tipping point, and I’m pleased to agree. There may be barriers for enterprise adoption, but as I frequently say: it’s not the technology, the barriers are between our ears (and maybe our pocketbooks :).

Update: forgot my own punchline. Standards need to be, or at least become, open and extensible for real progress to be made. When others can leverage a standard, the greatest innovations can occur.

Standards are hard work, but the benefits for progress are huge.   This holds true in your organization, as well.   Are you paying attention to standards you should be using, and what you should standardize yourself?

Mining Social Media

15 July 2009 by Clark Leave a Comment

One of the proposed benefits of social media is the capture of knowledge that’s shared, taking the tacit and making it explicit.   But really, how do we do this?   I think we need to separate out the real from the ideal.

The underlying premise is that we have an enlightened organization that’s empowering collaboration, communication, problem-solving, innovation, etc (what I’m beginning to term ‘inspiration’ in all senses of the word) by providing a social media infrastructure, learning scaffolding, and a supportive culture.   Now, all these people are sharing, but are we, and can we be, leveraging that knowledge?

The obvious first answer is that by sharing it with others, it’s being leveraged.   If information is shared with the relevant people, it’s been captured for organizational use by being spread appropriately.   That’s great, and far too few organizations are facilitating this in a systematic way.   However, I’m always looking for the optimal outcome: not just the best that is seen, but the best that can be. So how can we go further?

The typical response is using data mining that focuses on semantic content: systematically parsing the discussions, and using powerful semantic tools to attempt to capture, characterize, and leverage information systemically. (Hmm, you could map out the knowledge propositions, and link them into coherent chains and then track those over time to see significant changes, even regularly re-sort to see if different perspectives are changing…oh, sorry, got carried away, enough adaptive system designing :).
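
As a thought experiment on that aside, here’s a minimal sketch, in TypeScript, of what a crude first pass might look like: pull candidate term pairs out of discussion posts, bucket them by month, and re-sort to watch for shifts. Everything here is a hypothetical illustration; real semantic mining would use proper NLP tooling rather than raw token counts.

```typescript
// Hypothetical sketch: mine discussion posts for co-occurring terms,
// keyed by month, so the strongest associations can be compared over
// time. Crude token counting stands in for real semantic parsing.

interface Post {
  author: string;
  date: Date;
  text: string;
}

// Very rough term extraction: lowercase words, minus short/stop words.
const STOP = new Set(["the", "and", "that", "with", "for", "this", "are"]);
function terms(text: string): string[] {
  return text
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 3 && !STOP.has(w));
}

// Count term co-occurrence within each post, bucketed by month.
function cooccurrenceByMonth(posts: Post[]): Map<string, Map<string, number>> {
  const byMonth = new Map<string, Map<string, number>>();
  for (const post of posts) {
    const month = post.date.toISOString().slice(0, 7); // e.g. "2009-07"
    const counts = byMonth.get(month) ?? new Map<string, number>();
    const ts = terms(post.text);
    for (let i = 0; i < ts.length; i++) {
      for (let j = i + 1; j < ts.length; j++) {
        const pair = [ts[i], ts[j]].sort().join("+");
        counts.set(pair, (counts.get(pair) ?? 0) + 1);
      }
    }
    byMonth.set(month, counts);
  }
  return byMonth;
}

// Re-sorting each month's pairs surfaces the strongest associations;
// diffing the top lists between months hints at shifting perspectives.
function topPairs(counts: Map<string, number>, n = 5): string[] {
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([pair]) => pair);
}
```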

In terms of social media systems, while there are analytics available, semantics are not part of them, as far as I can see. Further, I searched on social media mining, and found that the first international workshop on the topic won’t happen until November. There’s an interesting PhD thesis on the topic from UMaryland, but it’s focused on blogs and recommendations. In other words, it’s not ready for prime time.

The point is that machine learning and knowledge mining mechanisms are in our future, but not our present. Don’t get me wrong, there are huge possibilities and opportunities here, but they’re a ways off. So, are we back to the best that can be? I want to suggest one other possibility. The systemic mechanisms are nice because, set up properly, they run regardless, but there’s another approach, and that’s human processing. For all the advances in technology, our brains are still pretty much the most practical semantic pattern-matching engines going. So how would that work?

Well, let’s go back to the role that learning professionals play. We’ve already looked at how they could change as learning units take over responsibility for the broader picture of learning in the organization.   Learning professionals need to be nurturing social learning, and that means being in there, monitoring discussions for opportunities to draw out other members, spark useful feedback, develop skills, and more.

Well, they also can and should be looking for outcomes that could be redesigned/redeveloped/reproduced for broader dissemination.   They should be monitoring what’s happening and looking for information that’s worth culling out and distilling into something that’ll really bring out the impact of that information. Turning information into knowledge and even wisdom!

Yes, that’s a greater responsibility (though it’s also fun; you shouldn’t be in the learning space if you don’t love learning!). It’s a new skill set, but I’ve already argued for that. The world’s changing, and the status quo won’t last long anyway. So, while you can just hope that individuals will perceive the value of the information created, and even facilitate that by encouraging people to participate in all the relevant communities (which will likely cross roles, products/services, and more), there’s a further step that’s to the benefit of both the organization and the learners.

We’ll steadily build support for that process, but it will be facilitated, and advanced, by individual practice that complements, supplements, and informs the mechanistic approaches. Don’t ignore this role; plan for it, prepare for it, and skill for it. Responsibility for recognizing valuable contributions should be shared, so that the individuals in the network are also doing it (for example, retweeting valuable information), and that’s a learning skill that should be developed.

Here’s hoping you find this valuable!

Beyond Web 2.0

7 July 2009 by Clark 3 Comments

In preparing for a talk I’m going to give, I was thinking about how to represent the trends from web 1.0 through 2.0 to 3.0.   As I’ve mentioned before, in my mind 3.0 is the semantic web. I think of web 2.0 as really two things, the social read-write user-generated content web, and the web-services mashup web.   In elearning, we tend to focus on the former, but the latter is equally important.

However, if we think about web 2.0 as user-generated content, we can think about 1.0 as producer-generated content.   The original web was what people savvy enough (whether tech or biz) could get up on the web.   The new web is where it’s easy for anyone to get content up, through blogs, photo-, video-, and slide-sharing sites, and more.

Extending that, what’s web 3.0 going to be?   If we take the semantic web concept, the reason we add these tags is for systems to start being able to use search and rules to find and custom-deliver content.   An extension, however, is to have the system generate the necessary content (cf Wolfram|Alpha).   In a sense, by knowing some things about you and your interests, needs, and activities, a system could proactively choose what and when to deliver information.

And that, to me, is really system-generated content, and a real opportunity.   It’s not ahead of what we can do (though I recognize it’s ahead of where most are ready to be; why do you think it’s called Quinnovation? :), but it’s certainly something to keep on your radar.   And when you’re ready, so am I!
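
To make that concrete, here’s a minimal sketch, in TypeScript, of the rule idea: semantic tags on content plus simple rules over a user profile decide what to deliver proactively. All the structures and names are hypothetical illustrations, not any particular semantic web standard.

```typescript
// Hypothetical sketch of system-selected content: tagged resources,
// a user profile, and rules that decide which tags to pull when.

interface Resource {
  title: string;
  tags: string[]; // semantic tags; in practice derived from markup/metadata
}

interface Profile {
  interests: string[];
  currentActivity: string; // e.g. "preparing-presentation"
}

// A rule pairs a predicate over the profile with the tags it should pull.
interface Rule {
  when: (p: Profile) => boolean;
  pullTags: string[];
}

const rules: Rule[] = [
  {
    when: (p) => p.currentActivity === "preparing-presentation",
    pullTags: ["examples", "visuals"],
  },
  {
    when: (p) => p.interests.includes("mobile"),
    pullTags: ["mobile"],
  },
];

// Proactively choose resources whose tags match any rule that fires.
function deliver(profile: Profile, library: Resource[]): Resource[] {
  const wanted = new Set(
    rules.filter((r) => r.when(profile)).flatMap((r) => r.pullTags)
  );
  return library.filter((res) => res.tags.some((t) => wanted.has(t)));
}
```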

Rethinking Virtual Worlds

24 June 2009 by Clark 3 Comments

I guess I have a visceral aversion to hype, because my initial reaction to ‘buzz’ is to focus in on the core affordances and disparage mistaken uses of a new technology.  However, I do eventually open up to taking advantage of the affordances in new ways. Case in point: learning styles.  I pointed out the flaws in the thinking several times, and then rethought them (without abandoning my previous views, I looked for the positive opportunities).  Now, preparing for a presentation, I’m rethinking some of my stances on learning in virtual worlds.

I’ve previously opined that there are two key affordances in virtual worlds: the spatial and the social, and that the technical overheads mean that unless there’s a long term relationship, the associated costs really argue that you should be hitting both.  I’m not changing that, but I was wondering what we might do if we did try to leverage those key affordances deliberately to support learning.

Taking a slightly cheeky approach, and quite willing to discredit presenting PowerPoint slides ‘in world’, I’ve tried to think through some subordinary, ordinary, and potentially extraordinary approaches to learning in a virtual world.  That is, opening learners up both cognitively and emotionally, presenting concepts, having examples available, creating meaningful practice, and scaffolding reflection.  What might we do?

Starting with pedagogy, I think a standard instructional design (read: presentations) is clearly subordinary.  An ordinary pedagogy might be a problem-based approach, but a really extraordinary approach might be to create a full immersive storyline in which the problem is embedded, turning it into a game world: a World of LearnCraft.  The idea is to mimic more closely the urgency typically felt when applying the knowledge in the real world (where it counts) by creating a similarly meaningful storyline to develop the associated motivation.  Then embedding resources in the story would scaffold the learning.  Of course, what I’m really talking about is game design ;).

Working with concepts, just presenting them is subordinary. Ordinary would be having them explorable, mapping them out in space, maybe with a scavenger hunt asking learners to find answers to questions embodied in the model.  A truly extraordinary approach would be to have the learners co-create the concept representation, using the collaborative creation capability available at least in Second Life.

Just having a poster for an example seems subordinary.  Having an example ‘gallery’, where you can examine the problem, the approach, and the results, would be an ordinarily good approach. Ideally, the example could have the conceptual model layered on top of the decisions, mapping them to represent how the concept played out in context.  Beyond that, however, having the example be truly exploratory, where you could make certain decisions and see how they play out, and being able to backtrack (particularly with annotation about the mistakes the original team made), would be really extraordinary.

Practice is where we can and should be looking to games.  While having a quiz would be truly subordinary (if not maniacally mistaken), having a problem to solve ‘in world’ would be an ordinary approach. Again, having the problem be situated in a storyline, as the overall pedagogy, would be truly meaningful.  It’s easiest if the task is inherently spatial and social, but we certainly can benefit from the immersion, and building in social learning components can lead to powerful outcomes.

I’m somewhat concerned about trying to put reflection ‘in world’, because it’s inherently an ‘immediate’ environment.  It’s synchronous, and it’s been documented that normally reflective kids can go all ‘twitch’ in a digital environment.  It may be that reflection is ‘best’ when kept out of the world.  But for the sake of argument, let’s consider external reflection to be subordinary, and consider what might be ordinary and extraordinary.  Surely, having an ‘in-world’ but ‘post-experience’ discussion would be the ordinary approach.  Again, co-creating a representation of the underlying model guiding performance would be a really powerful reflective opportunity.

You still want to make some very basic learning decisions about virtual worlds.  If you don’t have an inherent expectation of a long-term relationship with the world, the technical and learning overheads of gaining facility in using the world clearly suggest that you should seriously ensure the payoff is worth it (such as when the learning outcome is inherently spatial and social), and otherwise consider alternatives.  After that, you want to ensure that you’ve got meaningful practice.  That’s your assessment component, and you do want learners applying the knowledge.  I suppose you could have the world be for concepts and examples, and have practice in some other format, but I admit I’m not sure why.  Around the practice, figure out how to embed concept and example resources. Finally, seriously reflect on how you support reflection for your learners.

Serious learning can and does happen in virtual worlds, but to make it happen systematically is a matter of design, not just the platform.  Fair enough?

Virtual Worlds & SCORM

10 June 2009 by Clark Leave a Comment

I was invited (thanks, Eilif!) to attend SRI’s workshop for ADL on SCORM and Virtual Worlds (VW) today.   I furiously tweeted it (check out the #adlvw hashtag), but now it’s time for reflections.   Represented were a number of people from various VW vendors (at least Qwaq, Second Life, Thinking Worlds), as well as SRI and ADL folks, and Avron Barr representing LETSI.

In case you don’t know, SCORM was developed to be a way to support interoperable content for learning.   However, the demands have grown. Beyond interaction, there’s a desire to have assessment reportable back to an LMS, and as our digital content resources grow larger, to address data quantities that go beyond download.   Angelo Panar from ADL   helped us understand that there are myriad ways that SCORM doesn’t scale well to handle things other than stand-alone objects. Peter Smith from ADL emphasized the importance of game-based learning, and the potential of VWs for meaningful learning.

Ron Edmonds from SRI nicely summarized the intersection: SCORM is standardized and interoperable, VWs are in competition and have vastly different models. The question is, what is the relationship between the two? Eilif Trondsen nicely characterized the situation that learning spans a gap from formal to informal.   SCORM’s highly focused (as of now) on asynchronous independent learner experience, but VWs are about social interaction, and are platforms, where learning experiences can be built.

The questions they were trying to answer were how to design learning experiences and measure/assess them, and then what role SCORM plays.   It occurred to me that there are no issues unique to VWs except the social, so one particular path is to resolve the problems for SCORM and social media, and then port the solutions to VWs without requiring a unique VW solution.

Another issue is the level of granularity.   If you design a collaborative exercise, and the interaction and the collaborative response to reflection questions are what’s key for the learning, then it’s a very different situation than when the goal is tightly constrained responses to very specific situations, e.g. the difference between training and education.   Back to the continuum Eilif was talking about, it seems to me that we can match the level of definition of the measure to the desired outcome (duh!).   However, SCORM has trouble with free-form responses, so we get into some issues there.

The obvious ‘easy’ answer is to have SCORM just be a mechanism to introduce existing content objects ‘in world’.   That’s what a number of platforms have done, whether having SCORM objects appear as objects, or an embedded browser presents them.   A more complex alternative is to have an instructor or the learner respond via a custom interface with a response that’s relayed to an LMS using SCORM protocols. But can we go further?
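
Before going further, here’s a minimal sketch, in TypeScript, of what that relay alternative might look like on the content side. The API-discovery walk and the cmi.core.* element names come from the SCORM 1.2 run-time specification; the surrounding structure is an illustrative assumption, not any vendor’s actual integration.

```typescript
// Sketch: a custom interface beside the virtual world reports a result
// back to the LMS via the standard SCORM 1.2 runtime API.

interface Scorm12API {
  LMSInitialize(arg: ""): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
  LMSFinish(arg: ""): string;
}

// SCORM content locates the LMS-provided API object by walking up
// the frame hierarchy (per the SCORM 1.2 run-time environment).
function findAPI(win: Window): Scorm12API | null {
  let w: Window = win;
  for (let i = 0; i < 10; i++) {
    if ((w as any).API) return (w as any).API as Scorm12API;
    if (w.parent === w) break;
    w = w.parent;
  }
  return null;
}

// Relay an in-world exercise result to the LMS.
function reportResult(rawScore: number, passed: boolean): void {
  const api = findAPI(window);
  if (!api) return; // not launched from an LMS
  api.LMSInitialize("");
  api.LMSSetValue("cmi.core.score.raw", String(rawScore));
  api.LMSSetValue("cmi.core.lesson_status", passed ? "passed" : "failed");
  api.LMSCommit("");
  api.LMSFinish("");
}
```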

I’ve argued in the past that social interactions should be a design feature only if the learning objective includes social components.   However, I also pointed out today that the VW may only be part of the solution, and when we look at the broader picture of the learning experience, we may well wrap reflection outside the world.   So then our learning model needs to include more than just content presentation, and we start veering off to Educational Modeling Language and the IMS Learning Design specification, which really isn’t yet a part of SCORM (but arguably should be).

Really, our learning categorization has to include activities as broad as mentoring, coached real performance, and social interaction, as well as content exposure, and interactive activities. It needs to span VWs, social media, and more.   It’s about developing learners richly, not just presenting a prix fixe menu.

I’m mindful of the conversation I had with Adam Nelson from Linden Lab (one of many fruitful conversations at the breaks that helped frame the thoughts above), in which I asked whether his view of enterprise learning applications included my broad view of learning: that it’s not just about formal learning, or, worse, just ‘training’, but includes mentoring, discussions, all the way to expert collaboration.   That’s not necessarily what we need to track, but we do need to see the results, to look for opportunities (adding value as facilitators, not just content producers).

It’s clear that ‘in world’, we can have the equivalents of most social media, e.g. collaborative persistent spaces with representations and annotations are a richer form of wiki.   A shared element was the ‘overhead’ in virtual worlds, so the question is whether the affordances of virtual worlds are worth the investment.   I still believe that’s an issue of whether the domain/task is inherently 3D and/or that this is a long-term relationship so the investment is amortized.   There are lots of factors.   Still, it’s an intriguing idea to think that we will be able to interact, communicate, and collaborate in technology-augmented ways that aren’t possible in the real world. Of course, we’ll be able to do those in the real world too, largely, via ARGs (as I previously commented on the connections).

There’s a broad gap between what our tools enable and what standards are ready to support.   The ultimate question was what the role of ADL would be.   I reckon it’s early days for VWs, so the role in this regard is, to me, to track what’s happening and look for patterns that can be extracted and codified into ways to add value.

It’s the wild west or a goldrush right now, and the outcome is still to be decided.   However, the learning potential is, quite frankly, awesome, so it’s an exciting time.   Here’s to adventure!

A wee bit o’ experience…

11 March 2009 by Clark 1 Comment

A personal reflection, read if you’d like a little insight into what I do, why and what I’ve done.

Reading an article in Game Developer about some of the Bay Area history of the video game industry has made me reflective.   As an undergrad (back before there really were programs in instructional technology) I saw the link between computers and learning, and it’s been my life ever since.   I designed my own major, and got to be part of a project where we used email to conduct classroom discussion, in 1978!

Having called all around the country to find a job doing computers and learning,   I arrived in the Bay Area as a ‘wet behind the ears’ uni graduate to design and program ‘educational’ computer games.   I liked it; I said my job was making computers sing and dance.   I was responsible for FaceMaker, Creature Creator, and Spellicopter (among others) back in 81-82.   (So, I’ve been designing ‘serious games’, though these were pretty un-serious, for getting close to 30 years!)

I watched the first Silicon Valley gold rush, as the success of the first few home computers and software had every snake oil salesman promising that they could do it too.   The crash inevitably happened, and while some good companies managed to emerge out of the ashes, some were trashed as well.   Still, it was an exciting time, with real innovation happening (and lots of it in games; in addition to the first ‘drag and drop’ showing up in Bill Budge’s Pinball Construction Set, I put windows into FaceMaker!).

I went back to grad school for a PhD in applied cog sci (with Don Norman), because I had questions about how best to design learning (and I’d always been an AI groupie :).   I did a relatively straightforward thesis, not technical but focused on training meta-cognitive skills, a persistent (and, I argue, important) interest.   I looked at all forms of learning; not just cognitive but behavioral, ID, constructivist, connectionist, social, even machine learning.   I was also getting steeped in applying cognitive science to the design of systems, and of course hanging around the latest/coolest tech.   On the side, I worked part-time at San Diego State University’s Center for Research on Mathematics and Science Education working with Kathy Fischer and her application SemNet.

My next stop was the University of Pittsburgh’s Learning Research & Development Center for a post-doctoral fellowship working on a project about mental models of science through manipulable systems, and on the side I designed a game that exercised my dissertation research on analogy (and published on it).   This was around 1990, so I’d put a pretty good stake in the ground about computer games for deep thinking.

In 1991 I headed to the Antipodes, taking up a faculty position at UNSW in the School of Computer Science, teaching interface design, but quickly getting into learning technology again.   I was asked, and I supervised a project designing a game to help kids (who grow up without parents) learn to live on their own. This was a very serious game (these kids can die because they don’t know how to be independent), around 1993.   As soon as I found out about CGIs (the first ‘state’-maintaining technology) we ported it to the web (circa 1995), where you can still play it (the tech’s old, but the design’s still relevant).

I did a couple other game-related projects, but also experimented in several other areas.   For one, as a result of looking at design processes,   I supervised the development of a web-based performance support system for usability, as well as meta-cognitive training and some adaptive learning stuff.

I joined a government-sponsored initiative on online learning, determining how to run an internet university, but the initiative lost out to politics.   I jumped to another, and got involved in developing an online course that was too far ahead of the market (this would be about 1996-1997).   The design was lean, engaging, and challenging, I believe (I shared responsibility), and they’re looking at resurrecting it now, more than 10 years later!   I returned to the US to lead an R&D project developing an intelligent learning system, based on learning objects, that adapted to learner characteristics (hence my strong opinions on learning styles), which we got up and running in 2001 before that gold rush went bust.   Since then, I’ve been an independent consultant.

It’s been interesting watching the excitement around serious games.   Starting with Prensky, and then Aldrich, Gee, and now a deluge, there’s been a growing awareness and interest; now there are multiple conferences on the topics, and new initiatives all the time.   The folks in it now bring new sensibilities, and it’s nice to see that the potential is finally being realized. While I’ve not been in the thick of it, I’ve quietly continued to work, think, and write on the issue (thanks to clients, my book, and the eLearning Guild‘s research reports).   Fortunately, I’ve kept from being pigeonholed, and have been allowed to explore and be active in other areas, like mobile, advanced design, performance support, content models, and strategy.

The nice thing about my background is that it generalizes to many relevant tasks: usability and user experience design and information design are just two, in addition to the work I cited, so I can play in many relevant places, and not only keep up with but also generate new ideas.   My early technology experience and geeky curiosity keep me up on the capabilities of new tools, and allow me to quickly determine their fundamental learning capabilities.   Working on real projects, meeting real needs, and the ability to abstract to the larger picture have given me the ability to add value across a range of areas and needs.   I find that I’m able to quickly come in and identify opportunities for improvement, pretty much without exception, at levels from products, through processes, to strategy.   And I’m less liable to succumb to fads, perhaps because I’ve seen so many of them.

I’m incredibly lucky and grateful to be able to work in the field that is my passion, and still getting to work on cool and cutting edge projects, adding value.   You’ll keep seeing me do so, and if you’ve an appetite for pushing the boundaries, give me a holler!

Whither the library?

10 March 2009 by Clark 3 Comments

I go to libraries, and check out books.   I admit it, when there’s a lot I want to read, I’d rather read it on paper (at 1200 dpi) versus on the screen.   And some recent debates have got me thinking about libraries in general, public and university.   There’re some issues that are unresolved, but leave me curious.

As the editor on one for-profit journal (British Journal of Education Technology), and now on one ‘open access’ (Impact: Journal of Applied Research in Workplace E-learning), I’ve been thinking more about the role of the journal, and the library.   There’s certainly been a lively discussion going on about the internet and the role of for-profit publishers.

The model for decades has been that books, magazines, journals, and newspapers had material that was submitted, reviewed, edited, and published by publishers, and available for a fee.   Yes, there have been some free newspapers, paid for by advertising (e.g. San Diego’s weekly Reader was an eagerly sought resource while I was a student), but in general the costs of paper, publishing, distribution, and more meant that information had an associated overhead.

Libraries democratized access, by aggregating purchasing power.   People could come in and find material on particular subjects, read popular books, and more recently, also other materials like albums, tapes, CDs, DVDs, etc.   Public libraries provided places to read as well, and librarians were resources to find or ask about particular topics.   University libraries purchased journals, copies of textbooks, and of course the obvious reference materials, while providing places to study.

Now, of course, the internet has thrown all that on its head.   With some notable exceptions, people have the capability to put up information (e.g. this blog) and to access information (Google becoming a verb), and the distribution is covered in the cost of internet access.   Consequently, the publishers have struggled to come to grips with this.   As have researchers and learners.   On one side are those who say what’s on the internet isn’t vetted, while others say the proprietary information is irrelevant and the wisdom of the crowds reigns supreme.

One of the consequences has been the call for open access publishing: essentially, that articles are submitted, reviewed, and published online, with anyone able to view the outcomes.   This is a threat to publishers, who’ve argued strongly that their processes are time-tested.   And universities (particularly for promotion and tenure) have been slow to accept online publication as an equivalent, due to uncertainty about the rigor of the publication (clearly, it depends on the particular journal).

This isn’t restricted to journals, of course; textbooks are also under threat.   And publishers are similarly scrambling.   I’ve been advising publishers and working on projects to get them online, and more.   The ‘and more’ part is because I’ve been trying to tell them not “it’s not about the book, it’s about the content”, but instead “it’s not about the content, it’s about the experience”.   Whether academic publishing will continue is an interesting issue.   Publishers who’ve depended on it have serious issues.   So do libraries.

Which brings me back to my library. It’s a vibrant place, by no means dying.   While the book shelves are relatively quiet (though there are dedicated readers browsing the stacks), there are kids in the young readers’ section, people grazing the videos and music, and a queue for access to the internet.   They’re tightly coupled with other library networks, and so when a book I wanted wasn’t in our library system, they got it on loan from another library system in the state.   Easily!   They also have ways to make recommendations, even in areas they don’t read in themselves.

How about university libraries? They’re the ones I was curious about, and where I had some thoughts.   University libraries are more about research.   Popular culture will be distributed across media, and public libraries can have a role as a media access center, but university libraries are situated on internet rich campuses, where the demand for other popular media probably isn’t as strong.   Do they have a role?

I’ve argued before that the role of the university is shifting to developing 21st-century skills (unfortunately in lieu of our public education systems).   The library is well placed to accommodate this need. Librarians may not be the technology gurus, but they are (or can be) the information gurus.   The library is a hub of information searching, evaluation, and sense-making.   Librarians may need a mind-set shift, from finding resources to teaching their information science skills, but no one’s untouched (teachers need to move to being learning mentors, etc.).

I considered, but didn’t, title this post “Wither the library”, because I think libraries have a role.   They may need to shift their focus (and it occurs to me that we need to think about how they become more visual), but they still have a role.

Tools and tradeoffs

28 January 2009 by Clark 2 Comments

[Screenshot: the old Quinnovation site]

I’ve been busy updating my website.   The previous version was done by hand in an old version of Adobe’s DreamWeaver, and while it was very light and minimal, it wasn’t very ‘elegant’.   For instance, I’d had one problem that really bugged me and that I hadn’t been able to fix (though recently I managed to beat it into submission).   I had several options: continue to maintain it, pay someone to do a better job, or find some tool that makes it easy to make reasonable sites.   I got my mitts on a copy of RealMac’s RapidWeaver, and started to play around.

RapidWeaver uses templates: there are quite a few included, and you can pay for more.   I wasn’t completely happy with any of them, but by systematic exploration (aka messing around), I managed to make one I was happy with. (Recognize that the small size of the screenshots can make the old one look plausible, but it was a bit space-wasting; e.g. it’s still readable at 50%!)   I haven’t dived into the actual design behind the themes, as that takes me somewhere I don’t want to go.   Still, when I’d find things I thought it couldn’t do, I’d look deeper and find a way.   It took quite a few attempts to get things the way I liked them, but it’s mostly quite clean.   Yes, I could delve into CSS and PHP and really get a handle on it, but that’s not the best investment of my time, and otherwise I could’ve stuck with DreamWeaver.   It’s enough that I understand what they do, without getting into the syntax of a specific site.

[Screenshot: the new Quinnovation site]

The interesting thing to consider here, however, is the tradeoffs.   I wanted a decent starting point, with the application handling all the background work when I changed things around (like maintaining the navigation bar, adding the cookie crumbs, etc.).   I didn’t want to have to tweak everything myself. If I were a professional web designer, I’d want power tools; if I were an amateur I’d want hand-holding.   As it is, I want something in between.   RapidWeaver does a relatively elegant job of providing simplicity upfront but letting you open up the hood and mess about inside.   I had to get deep into the program to get some things done that I wanted to get done, but its output is better than I was getting on my own.   Note that I don’t like how its built-in ‘text and image’ pages look, so I went with HTML pages (which I can handle).

The more general lesson is that there are no right answers, only tradeoffs.   Ideally, you get more power as you take on more learning.   Andrea diSessa termed this ‘incremental advantage’, where well-designed tool environments give you more power as a direct outcome of your willingness to explore.   HyperCard had this, as you could start with just draw tools, but then explore fields, buttons, and backgrounds (before you hit the ‘HyperTalk’ programming language wall).

There’s been notable progress in providing power tools (though too many people don’t even know about the concept of ‘styles’), but there’s still a pretty linear relationship between learning and power.   For example, as I have mentioned before, everyone wants the full game development tool that doesn’t require programming, though I argue it can’t exist.   It’s nice (and all too rare) when you get an elegant segue from templates through to being able to open up the underpinnings.

Understanding the tradeoff between ease of use and power is important in bringing knowledge, information, and tools to your learners, as well as your own learning tools.   You’ll want good defaults, and then the ability to customize.   Some of our tools are still not doing a good job of that, and the tutorials still tend to be focused on either product features or rote procedures, instead of helping you understand the software model underneath.   We could do a lot better!

Back to your user goals: you’ve got to know what you’re trying to do, how much you’re willing to learn about it, and live within what that gives you.   As for the new website: put on your ‘potential customer’ goggles, prepared with what you’d want to know, and have a look; I welcome feedback to improve it!

DevLearn 08 Keynote: Tim O’Reilly

12 November 2008 by Clark 8 Comments

Tim O’Reilly, Web 2.0 guru, talked to us about what web 2.0 is and led us to his implications for what we do.   He started off talking about tracking the ‘alpha geek’.   These are the folks who manage to thrive and innovate despite us, rather than because of us.   He’s essentially built O’Reilly on watching what these folks do, analyzing the underlying patterns, and figuring out what’s key.

He argued that the stories that Web 2.0 is about open source, or about social, were surface takes, and that by looking at leading companies, e.g. Google, there was something else going on. It’s not just user-generated content, but mining user-generated data for value, and then adding value on top of it.   “Data is the intel inside.”

This led him to key competencies going forward being machine learning, statistics, and design.   It isn’t about well-structured data, but about finding the nuggets in messy data.   And it is about design as an “architecture of participation” that gets users to act in the ways you’d like.

His take-home message was six points that boil down to watching your alpha geeks and using them to help guide what you should be doing, to help others achieve their potential.   An inspiring message, in a very geek-cred way :).

I concept-mapped it: [mindmap image]
