Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

30 September 2008

What’s old is new again…

Clark @ 4:04 PM

When I was an undergraduate, I became excited about the connection between computers and learning.  My uni didn’t have a relevant degree back then, but I could design my own if I could get a faculty member to be my mentor.  I found Hugh Mehan and Jim Levin (very lucky on my part), and got to work on their experiment using email as an alternative to classroom discussion.  This was in 1978, and there was no internet, but we had the ARPAnet and off we went.

We found some interesting things, such as that asynchronous responses were more thoughtful, compared to the IRE (initiation-response-evaluation) format of face-to-face discussion.  And, messages could handle more than one topic at the same time. However, the overall dialog cycle took longer. Our results and some recommendations were published in 1983.

Imagine my surprise to hear an academic in an interview remark how he discovered that some folks who didn’t interact in the classroom, did find a voice in an online environment.  That was another of our findings, but only 20 years before this online learning expert got going.  I guess sometimes you can be too far ahead of the times…

That’s actually not to the academic’s discredit; it’s a reliable problem for interdisciplinary studies.  In HCI (interface design), you’d get someone from computer science opining about something new to them that was old hat in psychology, and vice versa.  Learning technology is the same way, bringing together techies, learning psychologists, and more, and it’s easy for work well known in one field to be rediscovered in another.

I actually got quite a lot of mileage straddling the HCI and EdTech fields, as EdTech had lots to learn from some of the HCI work going on, such as iterative prototyping methods.  There was similarly valuable work going the other way, too, as I’d suggest that some of the more cutting edge psychological stuff (e.g. activity theory) was first explored by the ed community.

The problem is somewhat exacerbated by the different journals: there’s no one clearing house.  Back then we published in Instructional Science.  Now it might be BJET, or Education Technology, or ETRD.  The point being, it’s not easy to track what’s been done before.

So, what’s the point?  I reckon it’s to be eclectic and read broadly, look for inspiration everywhere you go, keep an open mind, go to lots of conferences (e.g. hope to see you at DevLearn), talk to lots of people, and actively look for the application potential of new ideas.  At least it’s an exciting place to play!

24 September 2008

Free Web 2.0 Learning course!

Clark @ 3:15 PM

This is worth touting.  Michele Martin and Harold Jarche of Work Literacy, assisted by Tony Karrer, in conjunction with the eLearning Guild, are hosting a free Web 2.0 workshop.  Spread over six weeks leading up to DevLearn, there’s a topic a week, tasks to accomplish depending on your bandwidth, a community, and more.

The more I explore web 2.0 applications for organizational learning (and innovation, execution, etc), the more opportunities I see.  The technologies are really a core part of the performance ecosystem, and I am increasingly excited about the possibilities.

I haven’t met Michele (hope to at DevLearn), but know enough of the Guild, Tony, Harold, and her writings to be able to highly recommend this.  The price is right, the topic is essential, the crew is top-notch, how can you go wrong?

23 September 2008

WGU and online learning

Clark @ 5:01 PM

Today I had a chance to visit with Western Governors University.  Set up more than a decade ago, it’s gradually grown to an enrollment of more than 10K students.  It’s purely online, but supported by 20 states, which gives it some interesting opportunities (read: political clout).

At the core of their model is the fact that their curricula are entirely competency-based.  They build their programs around specific outcomes (developed from an industry-based advisory board, whether the industry be IT or education), align assessments, and design the course materials towards those assessments.  It’s a refreshing focus on meaningful outcomes, beyond that which many programs claim, and they’ve been able to get accreditation on that basis (not despite it).  It also allows flexibility in schedule, and testing-out.

They’re also working on developing the social learning around it, supporting both content discussions and learning discussions.  They’ve got a goal of helping learners succeed, and to this end are pretty up-front about what it takes to succeed in a largely self-directed learning environment, even with the mentors. Still, it’s an ongoing learning process (the law of unanticipated consequences).

Which is not to say that they don’t face challenges.  They want to keep costs down, and not become a development house, so they’ve focused on sourcing the learning resources, and have largely been tied to what’s available off the shelf.  The learner experience in terms of the prepared materials could be enhanced from a motivational standpoint.  Also, it’s hard to develop and maintain a focus on higher-level learning objectives.  Further, the technology environment is a moving target that demands continual improvement.  They’re taking systematic steps to address these issues.

Overall, it’s an impressive endeavor, on both principled and practical grounds.  Robert Mendenhall, the President, set out to change the model after many years experience, believing that competencies and technology could provide a viable alternative to existing practices, and WGU is a testimony to his vision and ability to sell and deliver it.  A worthy challenge to the status quo.

22 September 2008

eLearning 3.0

Clark @ 6:01 AM

In preparing a presentation for an organization on the learning value of Web 2.0, I realized that the development I’m looking forward to is web 3.0 and the learning possibilities.  Don’t get me wrong, I’m very enthusiastic over the 2.0 learning opportunities, as I’ve gotten to know them.  It’s just that the work I was doing years ago now has the technology infrastructure to be brought to life in a viable and ubiquitous way.  What it means is personalized learning wrapped around your life, instead of leaving your day-to-day life to attend an ‘event’ or self-directed searching.

Web 3.0 and beyond

The key here starts with the next generation of the web, the Semantic Web.  What this is about, to me, is the use of tags and meta-data to start adding meaning to the information out there.  To date, we’ve separated form from content, but the machine can’t operate on the data independently.  If we had semantics, meaning, through tags and meta-data, the system could start trolling for content. And, of course, we can start auto-tagging based upon content and generation, as well as making it part of our habits (e.g. as I try to remember to categorize my blog posts).  The point is, with this information, we can start connecting things.  This isn’t just about search, but about pro-active and opportunistic information delivery, and moving to the distributed learning model I’ve talked about before.
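To make the tagging idea concrete, here’s a minimal sketch (in Python; all the tags, titles, and the matching rule are invented for illustration) of how meta-data lets a system “troll for content”: each resource carries topic tags, and a matcher pulls in whatever overlaps with the learner’s current activity.

```python
# Each content item carries meta-data tags describing what it's about.
CONTENT = [
    {"title": "Spreadsheet pivot tables", "tags": {"spreadsheets", "analysis"}},
    {"title": "Citing sources",           "tags": {"writing", "research"}},
    {"title": "Chart junk to avoid",      "tags": {"charts", "analysis"}},
]

def relevant(activity_tags, content=CONTENT):
    """Return titles of content sharing at least one tag with the current activity."""
    return [c["title"] for c in content if c["tags"] & set(activity_tags)]

print(relevant({"analysis"}))  # both analysis-tagged items match
```

Real systems would of course use richer vocabularies (or folksonomies) than flat tag sets, but the principle is the same: once meaning is attached to content, matching becomes mechanical.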

A second opportunity is Web 3.0’s service-oriented architectures (SOA), or rather web-oriented architectures (WOA).  This is where capabilities are separated out into separate network-delivered services with APIs that anyone can tap if they have the proper codes (and authority).  What this does is let you build applications in a lightweight way, cobbling together the capabilities you need into the services you want.
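A hedged sketch of that “cobbling together” idea: the application below is nothing but glue over two independent services.  The service names, fields, and the `fetch` parameter (standing in for an HTTP client hitting each service’s API) are all hypothetical.

```python
def build_dashboard(user_id, fetch):
    """Compose two independent network-delivered services into one lightweight app."""
    profile = fetch("profile-service", f"users/{user_id}")
    goals = fetch("goal-service", f"goals/{user_id}")
    return {"name": profile["name"], "goals": goals["items"]}

# In real use, `fetch` would wrap an authorized HTTP call to each service's API;
# this stub shows the composition without any network dependency.
def fake_fetch(service, resource):
    data = {
        "profile-service": {"name": "Pat"},
        "goal-service": {"items": ["master pivot tables"]},
    }
    return data[service]

print(build_dashboard("u1", fake_fetch))
```

The point of the architecture is that the dashboard owns none of the capabilities it uses; swap in a different goal service and the glue code doesn’t change.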

What does this mean for learning?  It means that you can tag learning content and make it available.  Then you can have a system that looks at your learning goals, and your current activity (through a variety of context-sensitive mechanisms), and pull in a small tidbit opportunistically, or connect you with just the right person afterward.  The point is to move from macro courses to micro-learns, where you might be prepped right before an important task, supported in the middle of it, and provided reflection afterwards.  So your performance situations become learning situations.

To do this effectively means linking the meaning of your current activity with the status of your learning goals, and putting together an effective delivery mechanism depending on your technology infrastructure, preferences, etc.  The goal is to make a system that’s like having a personal mentor, but much more affordably.
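The move from macro courses to micro-learns can be sketched as a lookup keyed on the task and where you are in it.  This is only an illustration: the task names, phases, and tidbits below are invented, and a real system would infer the phase from context-sensitive mechanisms rather than being told.

```python
# (task, phase) -> a small learning tidbit delivered around the performance situation.
TIDBITS = {
    ("client-call", "before"): "Review the three objection-handling patterns.",
    ("client-call", "during"): "Checklist: restate the concern before answering.",
    ("client-call", "after"):  "Reflect: which objection surprised you, and why?",
}

def micro_learn(task, phase):
    """Pick the tidbit matching the current task and phase, if any."""
    return TIDBITS.get((task, phase), "No tidbit available.")

print(micro_learn("client-call", "before"))
```

Prep before the task, support during it, reflection after: the same performance situation serves as three learning situations.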

Now, don’t get me wrong. While this is doable, it’s quite far off, and won’t be easy.  It depends on several developments, such as some reasonable work on standardizing terminology (or a successful implementation of folksonomies) for both content and tasks (and/or a very good mapping process).  It’ll also require a business model that makes participation viable for all parties.  Finally, it’ll require some tuning to make a user experience that’s effective without being intrusive.  Still, I think it’s a great future, and would love to have a well-implemented version coaching me!  How about you?

21 September 2008

Design: final heuristics

Clark @ 6:35 AM

Part 4 of the 4 part series:

Here’re the final suite of heuristics I came up with many years ago as a result of looking at our design process and the barriers our cognitive architecture can put in our own way.

Full Spectrum Design: One of the most insidious problems observed in educational multimedia is a tendency to incorporate the entire solution into the computer.  The system becomes the repository of all the text, sound, graphics, etc., and the instruction.  Unfortunately, this does not properly reflect what’s known about reading text on screen and the role of the teacher.  In conjunction with the No Limits Analysis, another way to get the best design is to consider the full spectrum of media, particularly considering delivering text on paper, having the instruction accomplished by an instructor, etc.  The proper use of the term multimedia is to consider all the available media and their uses, and to distribute the instructional task across all of them.

No Limits Analysis: After assembling a team, the first step in the design process is analysis, and an important component is proper information gathering to ensure that all relevant possible sources of inspiration have been considered.  However, before we consider what others have done in the same space, we should see what we come up with when we think as if there were no limits.  This occurs after the pedagogical problem has been identified but before other examples are considered.  The step is to consider how the problem would be addressed if there were no technological limitations, as if anything could be accomplished as if by magic. Arthur C. Clarke said “any sufficiently advanced technology is indistinguishable from magic”, and we’re really at the stage where the barriers are our imaginations, not the technology.  So stop and think what an ideal solution would be.  You may not be able to achieve what you imagine, but you certainly can’t if you don’t identify that option, and you’ll have prematurely limited the solution space.

Kitchen Sink Analysis: After the No Limits Analysis comes the systematic consideration of other corners of the design space, or other relevant prototypes for modification.  Lewis & Rieman suggested that “plagiarism” is an appropriate design strategy (as far as your lawyers will let you, as they cautioned), where ideas are lifted from existing designs rather than reinventing the wheel.  An expansion of this concept is to not only look at what others have done but also to consider how instruction might proceed without computer support, what theory suggests as an approach, etc.  In short, the suggestion is to exhaustively search all potential sources of input into the design process, including the proverbial ‘kitchen sink’.  This process both helps populate the design space and discovers all the constraints.

And let me add one other that I didn’t explicitly include before:
Lateral input:  Research on brainstorming and creativity (cf. de Bono) has shown that besides being systematic and covering all the known or plausible solutions, lateral thinking is valuable. After you’ve been exhaustive within the box, find ways to get ‘outside the box’.  Use random inspirations: play a game, doodle, get some random input!  Get silly!  It may not be politically correct anymore, but back when I worked for a learning game company, the CEO (hi, Sky!) used to bring in pizza and beer on Friday afternoons and hold idea sessions!  There are lots of tools and approaches; just make sure you make some concerted effort here.

OK, that’s it for this series on design.  I hope these past few posts give you some useful guidance or ideas.  I welcome your own!

20 September 2008

Design: the first heuristics

Clark @ 6:12 AM

Part 3 of a 4 part series:

We talked previously about our cognitive limitations.  Here I list a subset of the heuristics I’ve discovered across design practices and from experience.  I’m sure you’ll find some that seem obvious, and have others of your own. However, I still see instances where these principles would have helped, so I’m tossing them out there just in case.

Team Design: One problem is covering all the required areas of expertise.  As was discovered in the early days of multimedia, designers should recognize and acknowledge the spectrum of talent required to properly develop a project.  In particular, there should be expertise on the team for the content area, the educational design, the interface design, the programming environment, and each media area to be used.  The management style has to allow the contributions of these experts to the design, and to resolve any fundamental contradictions only upon assessment of each point of view.  The saying is that the room is smarter than the smartest person in the room, but my caveat is that it’s only true if you manage the process right.  But diversity helps, and you want to find a way to involve diverse viewpoints early on, so ensure a suite of talents on the design team.

Egoless Design: Hand in hand with the above is the requirement for egoless design.  Each team member has to feel comfortable contributing to the overall design, and willing to offer and receive constructive criticism.  Egoless programming was the source of this approach, but it holds true for all group endeavors. The members have to recognize and respect the contributions of each other.  Team leaders can facilitate this by displaying the same quality themselves.  For instance, the opportunity for deliberately silly ideas can support a willingness to take risks.  There should be explicit discussions of process as well as product, as this is likely to prove valuable not only in leading to a high quality of output but in leading to improved capabilities of the team members.

3 Strategies Design: While creativity in the design process can contribute to selecting a good design, all designs will benefit from testing and refinement of the design.  Three strategies from the field of ‘user-centered design’ can be used to characterize an appropriate approach.  This process is captured in the notion of iterative, formative, and situated design, where successive iterations of the design are tested with real data and real users and the results are used to further inform the design process.  It has been reliably demonstrated that single passes at design fail for reasons specifically related to lack of appropriate testing.  In short, the waterfall approach doesn’t work.  We’ll find questions we can’t answer on the basis of existing information and we’ll have differing opinions. Prototype and test!  Also…

(the Double) Double P’s: Once some designs are generated, it is necessary to start producing limited-capability versions, or prototypes (different from the use of the term in the evolutionary design model), for testing to feed back into the design process.  The prototypes that are created should be of increasing fidelity, but it is tempting to start prototyping on the computer.  However, this can lead to functional fixedness.  A colleague many years ago had a rule for his team that no programming should proceed until a complete ‘storyboard’ had been completed on paper.  This leads to a prescription for the Double P’s: Postponing Programming and Preferring Paper.  User experience work has found that paper can be extremely valuable for experimenting, prototyping, and testing.  Get your answers before you spend lots of money and time!

OK, one more list of heuristics coming up.  And I look forward to yours and your feedback.

19 September 2008

Design: our barriers

Clark @ 8:12 AM

Part 2 of a 4 part series:

Cognitive psychology has identified certain characteristics of our brains. When it comes to problem solving (and we can think of design as problem-solving as well as search), we have certain behaviors that predispose us to certain types of solutions, in other words prematurely limiting our search space. These include functional fixedness, set effects, premature evaluation, and, not too surprisingly, social issues.

Functional fixedness is really just about not seeing new applications of a tool.  This can be seen when designers are familiar with a particular implementation tool or environment. The designs then tend to resemble what is easy to do with that tool (the old “when you have a hammer, everything looks like a nail”).  Alternatively, they may push the tool beyond its effective range trying to accomplish a particular outcome.  If the tool is the optimal one for the job this is not a problem, but in other cases the tool is familiar and so it is used despite a lack of relevance.  Even in robust design environments, projects often resemble what is easy to implement in that environment, not what the analysis would indicate.

Set effects are solving new problems in ways appropriate for previous problems, whether or not that’s appropriate now.  In the classic experiments, subjects who had solved several prior problems in a particular way would solve a new problem in that well-practiced way, while subjects who had not had that prior experience would find the simpler solution.  What was identified as “set effects” manifests itself in designers whose new solutions resemble prior solutions even when the old solutions are inappropriate.  While it is true that the consistency produced by set effects is often beneficial, that should be a conscious decision and not the accidental outcome of limitations of the designer or the design process.

Premature evaluation is where problem-solvers settle on a solution before all possibilities have been considered.  In creativity research, one of the hurdles has been that problem-solvers will pursue the first solution path that presents itself rather than delay solution while considering other potential paths that might be more effective.  The cognitive overhead of retaining a meta-level strategy that considers multiple solution paths is often overwhelmed by the effort to consider all the factors in the problem itself.  Considerable practice can be required to develop good maintenance of strategies as well as of the solution itself.

A final set of problems in problem-solving are the social ones: the difficulty of publicly suggesting an idea that may not be accepted, the difficulty in sharing an idea that may not get credited, the difficulty in offering help due to a perception that it may be an intrusion or unwelcome, and the difficulty in accepting help from others, whether out of reluctance to be a burden or resentment of the intrusion.  Another set of social problems has to do with different domains of expertise.  Most learning technology projects these days require multiple skill sets: writing, media production, software engineering, learning design, management, etc.  Who gets listened to? How do you coordinate all of this?

I suspect you recognize these problems, but of course the issue is what to do. Coming up in the next two posts: Team Design, Egoless Design, No Limits Analysis, ‘Kitchen Sink’ Analysis, Systematic Creativity, 3 Strategy Design, the (double) Double P’s, and Full Spectrum Design.  What are your ideas?

18 September 2008

Design: design as search

Clark @ 8:46 AM

A four-part series on design, barriers, and some heuristics to improve outcomes:

I found my passion in learning technology.  I took most of a computer science degree (after flirting with biology, and…) before designing my own major combining CS with education.  I went back to grad school to learn a lot about cognition and learning.  I took a rather eclectic approach, going beyond cognitive approaches to include behavioral approaches, instructional design, social learning theories, constructivist learning theories, even machine learning.  (I continue that today.)

As I took very much an ‘applied’ approach (not doing pure research, but interpreting research to meet real problems; now known as “design-based research” in the edtech field, though close to the more general ‘action research‘), and was teaching interaction design, I started looking at design processes in that same eclectic way. (NB: I’ve also tracked ‘engagement’, as those who’ve read the book or heard me speak on games know.)  I looked at interface design processes and instructional design processes; heck, I looked at architectural, industrial, engineering, and graphic design processes. And I realized some things that I’ve talked about in various places but haven’t written about in over 10 years, yet I think they’re still relevant.  I’m going to talk here about design and our cognitive (and other) barriers to it, and my plan is to subsequently post about a suite of heuristics I’ve come up with to minimize the consequences of those barriers.

Design can be characterized as a search of a ‘solution’ space. Think of all the possible solutions as an n-dimensional space or cloud: outside the cloud are designs that wouldn’t solve the problem (like the old approach we were using); inside the cloud are possible solutions.  Sometimes we can evolve an existing design incrementally to solve the problem, and sometimes we combine previous solutions to create a new one (or so some theories say).  Another way to think about it is that we have this infinite solution space, and then we start putting in constraints that limit the space (must cost less than $50K, must be doable in six weeks, must work in our technical environment, etc).  Constraints are actually good, as they limit the space we have to search.  However, too often when we consider all the constraints, we’ve made the space the null set; we’ve excluded any solution. Then we have to relax constraints: reduce scope, increase budget, etc.
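The constraint idea can be sketched directly: start with candidate designs, apply each constraint as a filter, and notice that piling on constraints can null the space out entirely.  The candidates, costs, and limits below are all made up for illustration.

```python
# A tiny "solution space": each candidate design with its cost and schedule.
candidates = [
    {"name": "custom sim",    "cost_k": 120, "weeks": 12},
    {"name": "off-the-shelf", "cost_k": 30,  "weeks": 4},
    {"name": "job aid",       "cost_k": 5,   "weeks": 2},
]

def search(space, max_cost_k, max_weeks):
    """Constraints shrink the space to search; too many can empty it."""
    return [c for c in space if c["cost_k"] <= max_cost_k and c["weeks"] <= max_weeks]

print([c["name"] for c in search(candidates, 50, 6)])  # two designs survive
print(search(candidates, 4, 1))                        # null set: time to relax constraints
```

When the second call comes back empty, that’s the “relax constraints” moment: raise the budget cap or the deadline and re-run the search.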

One of the problems is prematurely limiting our search. It turns out that our cognitive architecture has biases that may limit our search long before we consciously look at our options.  We may only search a limited space near our prior experience, and only achieve what’s known in search as a ‘local maximum’: a good solution for part of the space, but not the best solution overall (a ‘global maximum’). We need to know the barriers, and then propose solutions around those barriers.  Coming up: Functional Fixedness, Set-Effects, Premature Evaluation, and the ever-dreaded Social issues.  Stay tuned!
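The local-maximum trap has a standard illustration from search: a hill-climber that only ever moves to a better neighbor of where it already is.  The “quality” landscape below is a made-up list of design scores, not any real data.

```python
quality = [3, 5, 4, 2, 6, 9, 7]  # index = a candidate design; value = how good it is

def hill_climb(start):
    """Move to a better neighboring design until none exists (a local maximum)."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(quality)]
        best = max(neighbors, key=lambda j: quality[j])
        if quality[best] <= quality[i]:
            return i  # no neighbor improves on the current design
        i = best

print(hill_climb(0))  # stops at index 1 (score 5), a local maximum
print(hill_climb(6))  # reaches index 5 (score 9), the global maximum
```

Starting near your prior experience (index 0) strands you on the small hill; only a start elsewhere in the space reaches the best design.  That’s exactly why the heuristics in the following posts push for widening the search before committing.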

15 September 2008

Lack of skills

Clark @ 7:13 AM

One of the pervasive myths is that the new ‘digital natives’ are computer fluent.  However, I’m working on a project to address digital literacy skills where the expert says experience shows that students are rather naive; they have some skills, but maybe not efficient and effective ones, and are missing others.  That’s anecdotal, but we’re beginning to get evidence that the fluency myth doesn’t hold.

Michele Martin points us to this announcement from the UK that documents robust problems in youth use of computers.  The study shows that students are not using tools effectively, and also are not evaluating information appropriately. Which shouldn’t be a surprise.  They’re not getting well-structured instruction about it, and trusting to their own self-learning skills is known not to be effective, whether it’s the fact that pure exploratory environments don’t work (except for the small fraction of folks who are self-effective learners), or that people self-evaluate ineffectively.

As you might infer, this is true of individuals in the workplace as well, and Michele also points us to this (rather self-serving) piece by a company that trains on search skills, documenting the inefficiencies. Which makes the point that trusting to effective skills isn’t a fair expectation.

All of which, it occurs to me, makes the case yet again for the benefit of not just teaching work literacy skills, which I support, but also learning-to-learn skills.  And the context for that: creating a learning culture.

12 September 2008

Schooling Scandal

Clark @ 2:41 PM

Ok, I’m outraged. No kids in our elementary school have qualified for GATE (Gifted & Talented), including my own.  That’s surprising, since we have some extremely smart kids.  There are several in each classroom who are selected for special recognition at the end of the year and everything.  The evaluation for GATE is the Otis-Lennon test, a reasonably well-regarded assessment of abstract thinking and reasoning ability.

So what do I find out?  That we’re not teaching the skills that are evaluated by this test!  We’re teaching a bunch of rote things, and not the skills that will be the differentiators in the coming years.  Thanks, NCLU  (No Child Left Untested).

Now, I realize that schools are hurting for money, and it’s a dire mess, politically and practically.  The responsibility goes right on up to the decision makers in DC.  So the school district is forced into playing funny games; apparently, the GATE money is going to teacher training, with the belief that this will help them inculcate these skills in the children.  However, that’s not working.

I don’t know how close my son came (my daughter was only tested yesterday for the first time), as they oddly don’t let us know the results (except not/qualify).  However, teachers are not able to take the time for teaching these skills, and they ought to be. Yes, kids need to learn to read and write and do mathematical reasoning, but they’re only getting science since a group duns money from the parents to add it in, and they’re not getting the early exposure to the reasoning skills in a systematic way.

I’m afraid to think that the school district doesn’t want any kids to pass, because then they’d have to do things for them.  Our curriculum’s broken in serious ways, and our politicians aren’t making it better.  We need to be teaching reasoning skills and abstract problem-solving (even practice for these tests).  Now, I need an action plan.
