Learnlets

Clark Quinn’s Learnings about Learning

Augmented Reality Lives!

20 July 2017 by Clark

Augmented Reality (AR) is on the upswing, and I think this is a good thing. AR makes sense, and it’s nice to see both solid tool support and real use cases emerging. Here’s the news, but first, a brief overview of why I like AR.

As I’ve noted before, our brains are powerful, but flawed. As with any architecture, every choice comes with tradeoffs, and ours trades detail for pattern-matching. Technology is the opposite: it’s hard to get technology to do pattern matching, but it’s really good at rote. Together, they’re even more powerful. The goal is to most appropriately augment our intellect with technology, creating a symbiosis where the whole is greater than the sum of the parts.

Which is why I like AR: it’s about annotating the world with information, augmenting it to our benefit. It’s contextual, that is, doing things because of when and where we are. AR augments us sensorily, either auditory or visual (or kinesthetic, e.g. vibration). Auditory and kinesthetic annotation is relatively easy; devices generate sounds or vibrations (think GPS: “turn left here”). Non-coordinated visual information, information that’s not overlaid on what you see, is presented as graphics or text (think Yelp: maps and distances to nearby options). Tools already exist to do this, e.g. ARIS. However, arguably the most compelling and interesting case is aligned visuals.
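
To make the contextual, non-overlaid case concrete, here’s a minimal sketch in Python of a Yelp-style “what’s near me” annotation; the points of interest, coordinates, and notes are made up for illustration:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical annotated points of interest: (name, latitude, longitude, note)
POINTS_OF_INTEREST = [
    ("Cafe Luna", 32.7157, -117.1611, "Open until 9pm; good wifi"),
    ("History Museum", 32.7316, -117.1511, "Exhibit on how people learn"),
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def annotations_near(lat, lon, radius_km=2.0):
    """Return (distance, name, note) for points within radius_km, nearest first."""
    hits = []
    for name, p_lat, p_lon, note in POINTS_OF_INTEREST:
        d = distance_km(lat, lon, p_lat, p_lon)
        if d <= radius_km:
            hits.append((d, name, note))
    return sorted(hits)

# Annotate the world around the user's current (reported) location
for d, name, note in annotations_near(32.7157, -117.1610):
    print(f"{name} ({d:.1f} km away): {note}")
```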

Google Glass was a really interesting experiment, and it’s back. The devices – glasses with a camera and a projector that can present information on the glass – were available, but did little with where you were actually looking: there was a generic heads-up display and camera, but little alignment between what was seen and what was consequently presented to the user as additional information. That’s changed. Google Glass has a new Enterprise Edition, and it’s being used to meet real needs and generate real outcomes. Glasses are supporting accuracy in manufacturing situations requiring careful placement: the necessary components and steps are highlighted on screen, reducing errors and speeding up outcomes.

And Apple has released its Augmented Reality software toolkit, ARKit, with features to make AR easy. One interesting aspect is built-in machine learning, which could make aligning with objects in the world easy! Incompatible platforms and standards impede progress, but with Google and Apple creating tools for each of their platforms, development can be accelerated. (I hope to find out more at the eLearning Guild’s Realities 360 conference.)

While I think Virtual Reality (VR) has an important role to play for deep learning, I think contextual support can be a great way to extend learning (particularly personalization), as well as to provide performance support. That’s why I’m excited about AR. My vision has been that we’ll have a personal coaching system that knows where and when we are and what our goals are, and can facilitate our learning and success. Tools like these will make it easier than ever.

FocusOn Learning reflections

27 June 2017 by Clark

If you follow this blog (and you should :), it was pretty obvious that I was at the FocusOn Learning conference in San Diego last week (the previous two posts were mindmaps of the keynotes). And it was fun as always. Here are some further reflections on what happened, as an exercise in meta-learning.

There were three themes to the conference: mobile, games, and video. I’m pretty active in the first two (two books on the former, one on the latter), and the last is related to things I care about and talk about. The focus led to some interesting outcomes: some folks were very interested in just one of the topics, while others were looking a bit more broadly. Whether that’s good or not depends on your perspective, I guess.

Mobile was present, happily, and continues to evolve.  People are still talking about courses on a phone, but more folks were talking about extending the learning.  Some of it was pretty dumb – just content or flash cards as learning augmentation – but there were interesting applications. Importantly, there was a growing awareness about performance support as a sensible approach.  It’s nice to see the field mature.

For games, there were positive and negative signs. The good news is that games are being more fully understood in terms of their role in learning, e.g. deep practice. The bad news is that there’s still a lot of interest in gamification without a concomitant awareness of the important distinctions. Tarting up drill-and-kill with PBL (points, badges, and leaderboards; the new acronym, apparently) isn’t worth significant interest! We know how to drill what needs to be drilled, but our focus should be on intrinsic interest.

As a side note, the demise of Flash has left us without a good game development environment. Flash was both a development environment and a delivery platform. As a development environment, Flash had a low learning threshold, and yet could be used to build complex games. As a delivery platform, however, it’s woefully insecure (so much so that it’s been proscribed in most browsers). The fact that Adobe couldn’t be bothered to generate acceptable HTML5 out of the development environment, and let it languish, leaves the market open for another accessible tool. Unity and Unreal provide good support (as I understand it), but still require coding. So we’re not at an easily accessible place. Oh, for HyperCard!

Most of the video interest was in technical issues (how to get quality, and/or get it on the cheap), but there was also a lot of interest in interactive video. I think branching video is a really powerful learning environment for contextualized decision making. As a consequence, the advent of tools that make it easier is to be lauded. An interesting session with the wise Joe Ganci (@elearningjoe) and a GoAnimate guy talked about when to use video versus animation, which largely seemed to reflect my view (confirmation bias ;) that it’s about whether you want more context (video) or concept (animation). Of course, it was also about the cost of production and the need for fidelity (video more than animation in both cases).
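
Under the hood, a branching video is just a graph of clips and choice points. Here’s a minimal sketch; the clip names, choice labels, and structure are hypothetical, not any particular authoring tool’s format:

```python
# A branching-video scenario as a graph: each node is a clip, each choice points
# at the next clip. Clip names, labels, and structure are made up for illustration.
SCENARIO = {
    "intro": {
        "video": "intro.mp4",
        "choices": {"Ask the customer more questions": "probe",
                    "Offer the standard fix": "standard_fix"},
    },
    "probe": {
        "video": "probe.mp4",
        "choices": {"Escalate to a specialist": "escalate",
                    "Offer the standard fix": "standard_fix"},
    },
    "standard_fix": {"video": "standard_fix.mp4", "choices": {}},  # an ending
    "escalate": {"video": "escalate.mp4", "choices": {}},          # an ending
}

def play(node_id="intro"):
    """Walk the scenario, prompting for a decision at each branch point."""
    while True:
        node = SCENARIO[node_id]
        print(f"[playing {node['video']}]")
        if not node["choices"]:
            break  # reached an ending clip
        options = list(node["choices"].items())
        for i, (label, _) in enumerate(options, start=1):
            print(f"  {i}. {label}")
        choice = int(input("Your decision: ")) - 1
        node_id = options[choice][1]

if __name__ == "__main__":
    play()
```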

There was a lot of interest in VR, which crossed over between video and games. Which is interesting, because it’s not inherently tied to games or video! In short, it’s a delivery technology. You can do branching scenarios, full game engine delivery, or just video in VR. The visuals can be generated as video or from digital models. There was some awareness of this, e.g. fun was made of the idea of presenting PowerPoint in VR (just like Second Life ;).

I did an ecosystem presentation that contextualized all three (video, games, mobile) in the bigger picture, and also drew upon their cognitive and then L&D roles. I also deconstructed the game Fluxx (a really fun game with an interesting ‘twist’). Overall, it was a good conference (and nice to be in San Diego, one of my ‘homes’).

Tech and School Problems

14 June 2017 by Clark

After yesterday’s rant about problems in local schools, I was presented with a recent New York Times article about how the tech industry is getting involved in schools. And while the initiatives seem largely well-intentioned, they’re off target. There’s a lack of awareness of what meaningful learning is, and what meaningful outcomes could and should be. So it’s time to shed a little light.

Tech in schools is nothing new, going back to the early days of Apple and Microsoft vying to provide school computers and get a leg up on learners’ future tech choices. Now, however, the big providers have even more relative leverage: school funds continue to be cut, while the tech companies have grown relative to society.

One of the claims in the article is that the tech companies are able to do what they want, and this  is a concern. They can dangle dollars and technology as bait and get approval to do some interesting and challenging things.

However, some of the approaches have issues beyond the political:

One approach is to teach computer science to every student. The question is: is this worth it? Understanding what computers do well (and easily), and perhaps more importantly what they don’t, is necessary, no argument. The argument for computer programming is that it teaches you to break down problems and design solutions. But is computer science necessary? Could it be done with, say, design thinking? Again, I’m all for helping learners acquire good problem-solving skills. But I’m not convinced that this is necessarily a good idea (as beneficial as it is to the tech industry ;).

Another initiative is using algorithms, rules like the ones that Facebook uses to choose what ads to show you, to sequence math. A program, ALEKS, already did this, but this one mixes in gamification. And I think it’s patching a bad solution. For one, it appears to use the existing curriculum, which is broken (too much rote ability, too few transferable skills). And gamification? Can’t we, please, try to make math intrinsically interesting by making it useful? Abstract problems don’t help. Drilling key skills is good, but there are nuances in the details.

A third approach has students choosing the problems they work on, and teachers acting as facilitators. Of course, I’m a fan of this; I’ve advocated for gradually handing off control of learning to learners, to facilitate their development of self-learning. And in a recently misrepresented announcement, Finland is moving to topics with interleaved skills wrapped around them (e.g. not separate curricula, but intersecting math and chemistry in studying ecosystems). However, this takes teachers with skills across both domains, and the ability to facilitate discussion around projects. That’s a big ask, and has been a barrier to many worthwhile initiatives. Compounding this is that the end of a unit is assessed by a 10-point multiple-choice question. I worry about the design of those assessments.

I’m all for school reform. As Mark Warschauer put it, the only things wrong with American education are the curriculum, the pedagogy, and the way we use technology. I think the pedagogy being funded in the latter approach is a good one, but there are details that need to be worked out to make it a scalable success. And while problem-solving is a good curricular goal, we need to be thoughtful about how we build it in. Further, motivation is an important component of learning, but should it be intrinsic or extrinsic?

We really could stand to have a deeper debate about learning and how technology can facilitate it. The question is: how do we make that happen?

Evil design?

6 June 2017 by Clark

This is a rant, but it’s coupled with lessons.  

I’ve been away, and one side effect was a lack of internet bandwidth at the residence.  In the first day I’d used up a fifth of the allocation for the whole time (> 5 days)!  So, I determined to do all I could to cut my internet usage while away from the office.  The consequences of that have been heinous, and  on the principle of “it’s ok to lose, but don’t lose the lesson”, I want to share what I learned.  I don’t think it was evil, but it well could’ve been, and in other instances it might be.

So, to start, I’m an Apple fan. It started when I followed the developments at Xerox with Smalltalk and the Alto as an outgrowth of Alan Kay’s Dynabook work. Then the Apple Lisa was announced, and I knew this was the path I was interested in. I did my graduate study in a lab that was focused on usability, and my advisor was consulting to Apple, so when the Mac came out I finally justified a computer to write my PhD thesis on. And over the years, while they’ve made mistakes (canceling HyperCard), I’ve enjoyed their focus on making me more productive. So when I say that they’ve driven me to almost homicidal fury, I want you to understand how extreme that is!

I’d turned on iCloud, Apple’s cloud-based storage.  Innocently, I’d ticked the ‘desktop/documents’ syncing (don’t).  Now, with  every other such system that I know of, it’s stored locally *and* duplicated on the cloud.  That is, it’s a backup. That was my mental model.  And that model was reinforced:  I’d been able to access my files even when offline.  So, worried about the bandwidth of syncing to the cloud, I turned it off.

When I did, there was a warning that  said something to the effect of: “you’ll lose your desktop/documents”.  And, I admit, I didn’t interpret that literally (see: model, above).  I figured it would disconnect their syncing. Or I’d lose the cloud version. Because, who would actually steal the files from your hard drive, right?

Well, Apple DID!  Gone. With an option to have them transferred, but….

I turned it back on, but I still couldn’t afford the bandwidth, so I turned it off again, this time ticking the box that said to copy the files to my hard drive. COPY BACK MY OWN @##$%^& FILES! (See fury, above.) Of course, it started, and then said “finishing”. For 5 days! And I could see that my files weren’t coming back at any meaningful rate. But there was work to do!

The support guy I reached had some suggestions that really didn’t work. I did try to drag my entire documents folder from the iCloud drive to my hard drive, but it said it was estimating how long it would take, and hung on that for a day and a half. Not helpful.

In the meantime, I started copying over the files I needed to do work, and continued generating new ones reflecting what I was working on. Which meant that the folders in the cloud, and the ones on my hard drive that I had copied over, weren’t in sync any longer. And I have a lot of folders in my documents folder: writing, diagrams, client files, lots of important information!

I admit I made some decisions in my panic that weren’t optimal.  However, after returning I called Apple again, and they admitted that I’d have to manually copy stuff back.  This has taken hours of my time, and hours yet to go!

Lessons learned

So, there are several learnings from this. First, this is bad design. It’s frankly evil to take someone’s hard drive files after making it easy to establish the initial relationship. Now, I don’t think Apple’s intention was to hurt me this way; they just made a bad decision (I hope; an argument could be made that this was of the “lock them in and then jack them up” variety, but that’s contrary to most of their policies, so I discount it). Others, however, do make these decisions (e.g. internet and cable providers who only offer a 1- or 2-year price that then ramps up; unless you remember to check and change, you’ll end up paying them more than you should until you get around to noticing and doing something about it). Caveat emptor.

Second, models are important and can be used for or against you. We do  create models about how things work and use evidence to convince ourselves of their validity (with a bit of confirmation bias). The learning lesson is to provide good models.  The warning is to check your models when there’s a financial stake that could take advantage of them for someone else’s gain!

And the importance of models for working and performing is clear. Helping people get good models is an important boost to successful performance!  They’re not necessarily easy to find (experts don’t have access to 70% of what they do), but there are ways to develop them, and you’ll be improving your outcomes if you do.

Finally, until Apple changes their policy, if you’re a Mac and iCloud user I  strongly recommend you avoid the iCloud option to include Desktop and Documents in the cloud unless you can guarantee that you won’t have a bandwidth blockage.  I like the idea of backing my documents to the cloud, but not when I can’t turn it off without losing files. It’s a bad policy that has unexpected consequences to user expectations, and frankly violates my rights to  my data.

We now return you to our regularly scheduled blog topics.

 

Some new elearning companies ;)

23 May 2017 by Clark

As I continue to track what’s happening, I get the opportunity to review a wide number of products and services. While tracking them all would be a full-time job, occasionally some offer new ideas.  Here’s a collection of those that have piqued my interest of late:

Sisters eLearning: these folks are taking a kinder, gentler approach to their products and marketing their services. Their signature offering is a suite of templates for your elearning featuring cooperative play. Their approach in their custom development is quiet and classy. This is reflected in the way they promote themselves at conferences: they all wear mauve polos and sing beautiful a cappella. Instead of giveaways, they quietly provide free home-baked mini-muffins for all.

Yalms: these folks are offering  the ‘post-LMS’. It’s not an LMS, and  instead offers course management, hosting, and tracking.  It addresses compliance, and checks a whole suite of boxes such as media portals, social, and many non-LMS things including xAPI. Don’t confuse them with an LMS; they’re beyond that!

MicroBrain: this company has developed a system that makes it easy to take  your existing courses and chunk  them  up into little bits. Then it pushes them out on a  schedule.  It’s a serendipity model, where there’s a chance it just might be the right bit at the right time, which is certainly better than your existing elearning. Most  importantly, it’s mobile!

OffDevPeeps: these folks offer a full suite of technology development services including mobile, AR, VR, micro, macro, long, short, and anything else you want, all done at a competitive cost. If you are focused on the ‘fast’ and ‘cheap’ corners of the fast-cheap-good triangle, these are the folks to talk to. Coming soon to an inbox near you!

DanceDanceLearn: provides a completely unique offering. They have developed an authoring tool that makes it easy for you to animate dancers moving in precise formations that spell out content. They also have a synchronized swimming version.  Your content can be even more engaging!

There, I hope you’ll find these of interest, and consider checking them out.

Any relation between the companies portrayed and real entities is purely coincidental.  #couldntstopmyself #allinfun

Disruptive Innovation

18 May 2017 by Clark

I recently came across a document (PDF) about disruptive innovation based upon Clayton Christensen’s models, which I’d heard about but hadn’t really dug into. This one was presented around higher education innovation (a topic I’ve some familiarity with ;), so it provided a good basis for me to explore the story. It had some interesting features that are worth portraying, and then some implications for my thoughts on innovation, so I thought I’d share.

The model’s premise is that disruption requires two major things: a technology enabler and a business model innovation.  That is, there has to be a way to deliver this new advance, and it has to be coupled with a way to capitalize on the benefits.  It can’t just be a new technology in an existing business model, as that’s merely the traditional competitive innovation. Similarly, a new business model around existing technology is still within  competitive advancement.

A related requirement is to have a new entity ready to capitalize. This quote captured me: “In those few instances in which the leader in one generation became the leader in the next disruptive one, the company did so by setting up a completely autonomous business unit…”  You can’t do disruption from inside the game.  Even if you’re a player, you have to liberate resources to start anew.

Which is quite different from most innovation. Typical innovation is ‘within the box’. It comes from having an environment where people can experiment, share, and be exposed to new ideas, and from letting ideas incubate (ferment/percolate) over time. And this is a good thing. Disruptive innovation makes new industries, new companies, etc. And that’s also good (except, perhaps, for the disrupted). The point being that both innovations are valuable, but different.

It’s not clear to me what happens when an internal innovation comes up with an idea that’s really disruptive. Clearly, if the idea  clears the hurdles of complacency and inertia, you’d probably want to spin it off.  But most innovations just need a fair airing and trialing to get traction (though depending on scope, a bit of change management might be useful).

I encourage innovation, and creating the environment where it can happen. It’s valuable even in established businesses, and a fair bit is known about how to create an environment where it can flourish.  So, what can we innovate about innovation?

To LMS or not to LMS

3 May 2017 by Clark

A colleague recently asked (in general, not me specifically) whether there’s a role for LMS functions. Her query was about the value of having a place to see (recommended) courses, to track your development, etc. And that led me to ponder, and here’s my thinking:

My question is where to draw the line. Should you do social learning in the LMS’s version of that, or in a separate system? If you use the LMS for social interaction around courses (a good thing), how do you handle the handoff to the social tool used for teams and communities? It would seem to make sense to use the regular tool in the courses as well, to make it part of the habit.

Similarly, should you host non-course resources in the LMS, or out in a portal (which is employee-focused, not siloed)? Maybe the courses also make more sense in the portal, tracked with xAPI? I think I’d like to track self-learning, via accessing videos and documents, the same way I would formal learning with courses: I want to be able to correlate them with business measures, to test the outcomes of experiments in changes.
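
For concreteness, tracking that sort of self-directed resource access with xAPI amounts to sending a statement to an LRS. Here’s a minimal sketch in Python; the LRS endpoint, credentials, resource URL, and learner details are placeholders:

```python
import requests

# A minimal xAPI statement: actor / verb / object. Here we record that a learner
# watched a performance-support video; IDs, endpoint, and credentials are placeholders.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Pat Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/resources/troubleshooting-video",
        "definition": {"name": {"en-US": "Troubleshooting walkthrough video"}},
    },
}

response = requests.post(
    "https://lrs.example.com/xapi/statements",   # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),           # placeholder credentials
)
response.raise_for_status()
print("Stored statement(s):", response.json())
```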

Again, how should I be handling signups for things? I handle signups for all sorts of things via tools like Eventbrite. Is signing up for a training, with a waiting list, different from signing up for other events, such as a team party?

Now, for representing your learning, is that an LMS role, or an LRS dashboard, or…?  From a broader perspective, is it talent management or performance management or…?

I’m not saying an LMS doesn’t make sense, but it seems like it’s a minor tool at best, not the central organizing function. I get that it’s not a learning management system but a course management system, but is that the right metaphor? Do we want a learning tracking system instead, and is that what an LMS is, or could be, for?

When we start making a continuum between formal and informal learning, what’s the right suite of tools? I want to find courses and other things through a federated search of *all* resources. And I want to track many things besides course completions, because those courses should have real-world-related assignments, so they’re tracked as work, not learning. Or both. And I want to track the capabilities we’re developing, or continuing to develop, through coaching and stretch assignments. Is that an LMS, or…?
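
A federated search like that doesn’t have to be exotic. Here’s a minimal sketch; the three source functions are hypothetical stand-ins for whatever APIs the LMS, portal, and LRS actually expose:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for querying each silo; in practice these would call
# the LMS, portal, and LRS APIs and normalize their results.
def search_lms(query):
    return [{"title": "Coaching Basics (course)", "source": "LMS"}]

def search_portal(query):
    return [{"title": "Coaching conversation checklist (job aid)", "source": "portal"}]

def search_lrs(query):
    return [{"title": "Coaching demo video (widely viewed)", "source": "LRS"}]

def federated_search(query):
    """Query all sources in parallel and merge the results into one list."""
    sources = (search_lms, search_portal, search_lrs)
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda search: search(query), sources)
    return [hit for hits in result_lists for hit in hits]

for hit in federated_search("coaching"):
    print(f"[{hit['source']}] {hit['title']}")
```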

I have no agenda  to put the LMS out of business, as long as it makes sense in modern workplace learning. However, we  want to use the right tool for the right job, and create an ecosystem that supports us doing the right thing.    I don’t have an obvious answer, I’m just trying on a rethink (yes, thinking out loud ;), and wondering what your thoughts are.  So, what is the right way to think about this? Do you see a uniquely valuable aggregation of services that makes sense? (And I may have to dig in deeper and think about the essential components and map them out, then we can determine what the right suites of functions  are  to fulfill those needs.)

Human Learning is Not About to Change Forever

26 April 2017 by Clark

In my inbox was an announcement about a new white paper with the intriguing title  Human Learning is About to Change Forever.  So naturally I gave up my personal details to download a copy.  There are nine claims in the paper, from the obvious to the ridiculous. So I thought I’d have some fun.

First, let’s get clear. Our learning runs on our brain, our wetware. And that’s not changing in any fundamental way in the near future. As a famous article once had it: phenotypic plasticity triumphs over genotypic plasticity (in short, our human advantage has been gained via our ability to adapt individually and learn from each other, not through species evolution). The latter takes a long time!

And as a starting premise, the “about to” bit implies these things are around the corner, so that’s going to be a bit of my critique. But nowhere near  all of it.  So here’s a digest of the  nine claims and my comments:

  1. Enhanced reality tools will transform the learning environment. Well, these tools will certainly augment the learning environment (pun intended :). There’s evidence that VR leads to better learning outcomes, and I have high hopes for AR, too. Though is that a really fundamental transition? We’ve had VR and virtual worlds for over a decade at least. And is VR an evolutionary or revolutionary change from simulations? Then they go on to talk about performance support. Is that transforming learning? I’m on record saying contextualized learning (e.g. AR) is the real opportunity to do something interesting, and I’ll buy it, but we’re a long way away. I’m all for AR and VR, but saying that it puts learning in the hands of the students is a design issue, not a technology issue.
  2. People will learn collaboratively, no matter where they are. Um, yes, and…? They’re already doing this, and we’ve been social learners for as long as we’ve existed. I still think the possibility of collaboratively creating in 3D in virtual worlds is potentially cool, but even as the technology limitations come down, the cognitive limitations remain. I’m big on social learning, but mediating it through technology strikes me as just a natural step, not transformation.
  3. AI will banish intellectual tedium. Everything is awesome. Now we’re getting a wee bit hypish. The fact that software can parse text and create questions is pretty impressive. But questions about semantic knowledge aren’t going to transform education. Whether the questions are developed by hand or by machine, they aren’t likely on their own to lead to new abilities to do. And AI is not yet at the level (nor will it be soon) where it can take content and create compelling activities that will drive learners to apply knowledge and make it meaningful.
  4. We will maximize our mental potential with wearables and neural implants. Ok, now we’re getting confused and a wee bit silly. Wearables are cool, and where they can sense things about you and the world, they can start doing some very interesting AR. But transformative? This still seems like a push. And neural implants? I don’t like surgery, and messing with my nervous system when you still don’t really understand it? No thanks. There’s a lot more to it than managing to adjust firing to control limbs. The issue is again about the semantics: if we’re not getting meaning, it’s not really fundamental. And given that our conscious representations are scattered across our cortex in rich patterns, this just isn’t happening soon (nor do I want that much connection; I don’t trust them not to ‘muck about’).
  5. Learning will be radically personalized.  Don’t you just love the use of superlatives?  This is in the realm of plausible, but as I mentioned before, it’s not worth it until we’re doing it on  top of good design.  Again, putting together wearables (read: context sensing) and personalization will lead to the ability to do transformative AR, but we’ll need a new design approach, more advanced sensors, and a lot more backend architecture and semantic work than we’re yet ready to apply.
  6. Grades and brand-name schools won’t matter for employment. Sure, that MIT degree is worthless! Ok, so there’s some movement this way. That would actually be a nice state of affairs. It’d be good if we started focusing on competencies and built new brand names around real enablement. I’m not optimistic about the prospects, however. Look at how hard it is to change K12 education (the gap between what’s known and what’s practiced hasn’t significantly diminished in the past decades). Market forces may change it, but the brand names will adapt too, once it becomes an economic necessity.
  7. Supplements will improve our mental performance. Drink this and you’ll fly! Yeah, or crash. There are ways I want to play with my brain chemistry, and ways I don’t. As an adult! I really don’t want us experimenting on children, risking potential long-term damage, until we have a solid basis. We’ve had chemicals supporting performance for a while (see military use), but this is still in its infancy, and I’m not sure our experiments with neurochemicals can surpass what evolution has given us, at least not without some pretty solid understanding. This seems like long-term research, not near-term plausibility.
  8. Gene editing will give us better brains. It’s alive! Yes, Frankenstein’s monster comes to mind here. I do believe it’s possible that we’ll eventually be able to outdo evolution, but I reckon there’s still plenty we don’t know about the human genome or the human brain. This similarly strikes me as a valuable long-term research area, but in the short term there are so many gene interactions we don’t yet understand that I’d hate to risk the possible side effects.
  9. We won’t have to learn: we’ll upload and download knowledge. Yeah, it’ll be great! See my comments above on neural implants: this isn’t ready for primetime. More importantly, it’s supremely dangerous. Do I trust what you say you’re making available for download? That’s certainly not the case now with many things, including advertisements. Think about downloading to your computer: not just spam ads, but viruses and malware. No thank you! Not that I think it’s close, but I’m not convinced we can ‘upgrade our operating system’ anyway. Given the way our knowledge is distributed, the notion of changing it with anything less than practice seems implausible.

Overall, this reads more like a sci-fi fan’s dreams than a realistic assessment of what we should be preparing for. No, human learning isn’t going to change forever. The ways we learn, e.g. the tools we learn with, are changing, and we’re rediscovering how we really learn.

There are better guides available to what’s coming in the near term that we should prepare for. Again, we need to focus on good learning design, and on leveraging technology in ways that align with how our brains work, not trying to meld the two. So, those are my opinions; I welcome yours.

Top 10 Tools for @C4LPT 2017

19 April 2017 by Clark

Jane Hart is running her annual Top 100 Tools for Learning poll (you can vote too), and here’s my contribution for this year. These are my personal learning tools, ordered according to Harold Jarche’s Seek-Sense-Share model, as ways to find answers, to process them, and to share for feedback:

  1. Google Search is my go-to tool when I come across something I haven’t heard of. I typically will choose the Wikipedia link if there is one, but also will typically open several other links and peruse across them to generate a broader perspective.
  2. I use GoodReader on my iPad to read PDFs and mark up journal submissions.  It’s handy for reading when I travel.
  3. Twitter  is one of several ways I keep track of what people are thinking about and looking at. I need to trim my list again, as it’s gotten pretty long, but I keep reminding myself it’s drinking from the firehose, not full consumption!  Of course, I share things there too.
  4. LinkedIn is another tool I use to see what’s happening (and occasionally engage in). I have a group for the Revolution, which is largely me posting things, but I do try to stir up conversations. I also see, and occasionally comment on, postings by others.
  5. Skype lets me stay in touch with my ITA colleagues, hence it’s definitely a learning tool. I also use it occasionally to have conversations with folks.
  6. Slack is another tool I use with some groups  to stay in touch. People share there, which makes it useful.
  7. OmniGraffle is my diagramming tool, and diagramming is a way I play with representing my understandings. I will put down some concepts in shapes, connect them, and tweak until I think I’ve captured what I believe. I also use it to mindmap keynotes.
  8. Word is a tool I use to play with words as another way to explore my thinking. I use outlines heavily and I haven’t found a better way to switch between outlines and prose. This is where things like articles, chapters, and books come from. At least until I find a better tool (haven’t really got my mind around Scrivener’s organization, though I’ve tried).
  9. WordPress is my blogging tool (what I’m using here), and serves both as a thinking tool (if I write it out, it forces me to process it) and as a sharing tool (obviously).
  10. Keynote is my presentation tool. It’s where I’ll noodle out ways to share my thinking. My presentations may get rendered to PowerPoint eventually out of necessity, but Keynote is my creation and preferred presentation tool.

Those are my tools; now what are yours? Use the link to let Jane know; her collection and analysis of the tools is always interesting.

Artificial Intelligence or Intelligence Augmentation

12 April 2017 by Clark

In one of my networks, a recent conversation has been on Artificial Intelligence (AI) vs Intelligence Augmentation (IA). I’m a fan of both, but my focus is more on the IA side. It triggered some thoughts that I penned to them and thought I’d share here [notes to clarify inserted with square brackets like this]:

As context, I’m an AI ‘groupie’, and was a grad student at UCSD when Rumelhart and McClelland were coming up with PDP (parallel distributed processing, aka connectionist or neural networks). I personally was a wee bit enamored of genetic algorithms, another form of machine learning (but a bit easier to extract semantics from, or maybe just simpler for me to understand ;).

Ed Hutchins was talking about distributed cognition at the same time, and that remains a piece of my thinking about augmenting ourselves. We don’t do it all in our heads, so what can be in the world and what has to be in the head? [the IA bit, in the context of Doug Engelbart]

And yes, we were following fuzzy logic too (our school was definitely on the left-coast of AI ;). Symbolic logic was considered passé! Maybe that’s why Zadeh [progenitor of fuzzy logic] wasn’t more prominent here (making formal logic probabilistic may have seemed like patching a bad core premise)? And I managed (by hook and crook, courtesy of Don Norman ;) to attend an elite AI convocation held at an MIT retreat with folks like McCarthy, Dennett, Minsky, Feigenbaum, and other lights of both schools. (I think Newell was there, but I can’t state for certain.) It was groupie heaven!

Similarly, it was the time of the emergence of ‘situated cognition’ too (a contentious debate with proponents like Greeno and even Bill Clancey, while old-school symbolists like Anderson and Simon argued to the contrary). Which reminds me of Harnad’s Symbol Grounding problem, a much meatier objection to real AI than Dreyfus’ or the Chinese room concerns, in my opinion.

I do believe we ultimately will achieve machine consciousness, but it’s much further out than we think. We’ll have to understand our own consciousness first, and that’s going to be tough, MRI and other such research notwithstanding. And it may mean simulating our cognitive architecture on a sensor-equipped processor that must learn through experimentation and feedback as we do, e.g. taking a few years just to learn to speak! (“What would it take to build a baby” was a developmental psych assignment I foolishly attempted ;)

In the meantime, I agree with Roger Schank (I think he was at the retreat too) that most of what we’re seeing, e.g. Watson, is just fast search, or pattern-learning. It’s not really intelligent, even if it’s doing it like we do (the pattern learning). It’s useful, but it’s not intelligent.

And, philosophically, I agree with those who have stated that we must own the responsibility to choose what we take on and what we outsource. I’m all for self-driving vehicles, because the alternative is pretty bad (tho’ could we do better in driver training or licensing, like in Germany?). And I do want my doctor augmented by powerful rote operations that surpass our own abilities, and also by checklists and policies and procedures, anything that increases the likelihood of a good diagnosis and prescription. But I want my human doctor in the loop. We still haven’t achieved the integration of separate pattern-matching, and exception handling, that our own cognitive processor provides.
