Learnlets


Clark Quinn’s Learnings about Learning

AI Reflections

15 September 2017 by Clark

Last night I attended a session on “Our Relationship with AI” sponsored by the Computer History Museum and the Partnership on AI. In a panel format, noted journalist John Markoff moderated Apple‘s Tom Gruber, AAAI President Subbarao Kambhampati, and IBM Distinguished Research Scientist Francesca Rossi. The overarching theme was:  how are technologists, engineers, and organizations designing AI tools that enable people and devices to understand and work with each other?

It was an interesting session, with the conversation ranging from what AI is, to what it could and should be used for, and how to develop it in appropriate ways. Concerns about AI's capabilities, roles, and potential misuses were addressed. Here I'm presenting just a couple of the thoughts it triggered, as I've previously riffed on IA (Intelligence Augmentation) and Ethics.

One of the questions that arose was whether AI is engineering or science. The answer, of course, is both. There’s ongoing research on how to get AI to do meaningful things, which is the science part. Here we might see AI that can learn to play video games.  Applying what’s currently known to solve problems is the engineering part, like making chatbots that can answer customer service questions.

A related question was what AI can do. Put very simply, the proposal was that AI can do anything you can make a judgment on in a second: whether what you see is a face, or whether a claim is likely to be fraudulent. If you can provide a good (large) training set that says 'here's the input, and this is what the output should be', you can train a system to do it. Or, in a well-defined domain, you can say 'here are the logical rules for how to proceed', and build that system.
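To make the distinction concrete, here's a minimal sketch (in Python, with invented data and a generic scikit-learn classifier; nothing here was demonstrated at the session). The first part learns from labeled examples, the second encodes an explicit rule for a well-defined domain.

```python
# Illustrative sketch only: tiny made-up example of the two approaches.
from sklearn.linear_model import LogisticRegression

# 1) Learned from data: "here's the input, and this is what the output should be".
#    Hypothetical features: claim amount and days since the account was opened.
X = [[120, 400], [95, 380], [5000, 2], [7200, 1]]   # inputs
y = [0, 0, 1, 1]                                     # outputs: 0 = ok, 1 = likely fraud
model = LogisticRegression().fit(X, y)
print(model.predict([[6400, 3]]))                    # the trained system judges a new case

# 2) Hand-built rules for a well-defined domain: "here are the logical rules".
def rule_based_flag(amount, account_age_days):
    return amount > 1000 and account_age_days < 30   # explicit, inspectable logic

print(rule_based_flag(6400, 3))
```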

The ability to do these tasks, another point went, is what leads to fear: "Wow, they can be better than me at this task; how soon will they be better than me at many tasks?" The important point made is that these systems can't generalize beyond their data or rules. They can't say: 'oh, I played this video driving game, so now I can drive a car'.

Which means that the goal of artificial general intelligence, that is, a system that can learn and reason about the real world, is still an unknown distance away. It would either have to have a full set of knowledge about the world, or it would have to have both the capacity and the experience that a human learns from (starting as a baby). Neither approach has shown any sign of being close.

A side issue was that of the datasets. It turns out that datasets can carry, and systems can learn, implicit biases. A case study was mentioned in which Asian faces triggered 'blinking' warnings, owing to typical eye shape. And this was from an Asian company! Similarly, word associations learned from text ended up biasing women towards kitchens and homes, compared to men. This raises a big issue when it comes to making decisions: could loan offerings, fraud detection, or other applications of machine learning inherit bias from their datasets? And if so, how do we address it?
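As a toy illustration of how such an association bias might be surfaced (this is not the study mentioned, and the vectors are invented), one can compare similarities between learned word representations:

```python
# Toy illustration (made-up vectors): measuring association bias in learned embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system these vectors would come from a trained embedding model.
vectors = {
    "woman":   np.array([0.9, 0.1, 0.3]),
    "man":     np.array([0.1, 0.9, 0.3]),
    "kitchen": np.array([0.8, 0.2, 0.1]),
}

# If this gap is large, the training data has baked in an association.
print(cosine(vectors["woman"], vectors["kitchen"]) - cosine(vectors["man"], vectors["kitchen"]))
```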

Similarly, one issue was that of trust. When do we trust an AI algorithm? One suggestion was that trust would come through experience (repeatedly seeing benevolent decisions or support), which wouldn't be that unusual. We might also employ techniques that work with humans: authority of the providers, credentials, testimonials, etc. One of my concerns was whether that could be misleading: we trust one algorithm, and then transfer that trust (inappropriately) to another? That wouldn't be unknown in human behavior either. Do we need a whole new set of behaviors around NPCs? (Non-Player Characters, a reference to game agents that are programmed, not people.)

One analogy raised was to the industrial age. We started replacing people with machines. Did that mean a whole bunch of people were suddenly out of work? Or did that mean new jobs emerged to be filled? Or, since we're now automating human-type tasks, will there be fewer tasks overall? And if so, what do we do about it? It clearly should be a conscious decision.

It’s clear that there are business benefits to AI. The real question, and this isn’t unique to AI but happens with all technologies, is how we decide to incorporate the opportunities into our systems. So, what do you think are the issues?

 

Ethics and AI

2 August 2017 by Clark

I had the opportunity to attend a special event pondering the ethical issues that surround Artificial Intelligence (AI).  Hosted by the Institute for the Future, we gathered in groups beforehand to generate questions that were used in a subsequent session. Vint Cerf, co-developer of the TCP/IP protocol that enabled the internet, currently at Google, responded to the questions.  Quite the heady experience!

The questions were quite varied. Our group looked at Values and Responsibilities. I asked whether that was for the developers or the AI itself. Our conclusion was that it had to be the developers first. We also considered what else has been done in technology ethics (e.g. diseases, nuclear weapons), and what is unique to AI.  A respondent mentioned an EU initiative to register all internet AIs; I didn’t have the chance to ask about policing and consequences.  Those strike me as concomitant issues!

One of the unique areas was 'agency', the ability for an AI to act. This led to a discussion of the need for oversight of AI decisions. However, I suggested that humans, if the AI was mostly right, would fatigue. So we pondered: could an AI monitor another AI? I also noted that there's evidence that consciousness is emergent, and so we'd need to keep the AIs from communicating. It was pointed out that the genie is already out of the bottle, with chatbots online. Vint suggested that our brain is layered pattern-matchers, so maybe consciousness is just the topmost layer.

One recourse is transparency, but it needs to be rigorous. Blockchain’s distributed transparency could be a model. Of course, one of the problems is that we can’t even explain our own cognition in all instances (we make stories that don’t always correlate with the evidence of what we do). And with machine learning, we may be making stories about what the system is using to analyze behaviors and make decisions, but it may not correlate.

Similarly, machine learning is very dependent on the training set. If we don't pick the right inputs, we might miss factors that would be important to incorporate in producing answers. Even if we have the right inputs, if we don't have a good training set of good and bad outcomes, we get biased decisions. It's been said that what people are good at is crossing the silos, whereas the machines tend to be good in narrow domains. This is another argument for oversight.

The notion of agency also brought up the issue of decisions. Vint inquired why we were so lazy in making decisions. He argued that we're making systems we no longer understand! I didn't get the chance to answer that decision-making is cognitively taxing. As a consequence, we often work to avoid it. Moreover, some of us are interested in X, and so are willing to invest the effort to learn it, while others are interested in Y. So it may not be reasonable to expect everyone to invest in every decision. Also, our lives keep getting more complex; when I grew up, you just had a phone and TV, but now you need to worry about internet, and cable, and mobile carriers, and smart homes, and… So it's not hard to see why we want to abdicate responsibility when we can! But when can we, and when do we need to be careful?

Of course, one of the issues is about AI taking jobs. Cerf stated that innovation takes jobs, and generates jobs as well. However, the problem is that those who lose the jobs aren't necessarily capable of taking the new ones. Which brought up an increasing need for learning to learn as the key ability for people. Which I support, of course.

The overall problem is that there isn't a central agreement on what ethics a system should embody, even if we could do it. We currently have different cultures with different values. Could we find agreement when some might have different views of what, say, acceptable surveillance would be? Is there some core set of values required for a society to 'get along'? However, that might vary by society.

At the end, there were two takeaways. For one, the question is whether AI can help us help ourselves! And the recommendation is that we should continue to reflect and share our thoughts. This is my contribution.

Realities 360 Reflections

1 August 2017 by Clark

So, one of the two things I did last week was attend the eLearning Guild's Realities 360 conference. Ostensibly about Augmented Reality (AR) and Virtual Reality (VR), it ended up being much more about VR. Which isn't a bad thing; it's probably as much a comment on the state of the industry as anything. However, there were some interesting learnings for me, and I thought I'd share them.

First, I had a very strong visceral exposure to VR. While I've played with Cardboard on the iPhone (you can find a collection of resources for Cardboard here), it's not quite the same as a full VR experience. The conference provided a chance to try out apps for the HTC Vive, Sony PlayStation VR, and the Oculus. On the Vive, I tried a game where you shot arrows at attackers. It was quite fun, but mostly developed some motor skills. On the Oculus, I flew an X-Wing fighter through an asteroid field, escorting a ship and shooting enemy TIE fighters. Again, fun, but mostly about training my motor skills in this environment.

It was one on the Vive, I think, that gave me a real experience. In it, you're floating around the International Space Station. It was very cool to see the station and experience the immersion of 3D, but it was also very uncomfortable. Because I was trying to fly around (instead of using handholds), my viewpoint would fly through the bulkhead doors, and the positioning gave the visual cues that my chest was going through the metal edge. This was extremely disturbing to me! As I couldn't control it well, I was doing this continually, and I didn't like it. Partly it was the control, but it was also the total immersion. And that was impressive!

There are empirical results that demonstrate better learning outcomes for VR, and I can certainly see that, particularly for tasks that are inherently 3D. There's also another key result, highlighted in the first keynote: that VR is an 'empathy machine'. There have been uses for things like understanding the world according to a schizophrenic, and a credit-card call center helping employees understand the lives of card users.

On principle, such environments should support near transfer when designed to closely mimic the actual performance environment. (Think: flight or medical simulators.) And the tools are getting better. There's an app that allows you to take photos of a place to put into Cardboard, and game engines (Unity or Unreal, or both) will now let you import AutoCAD models. There was also a special camera that could sense the distances in a space and automatically generate a model of it. The point being that it's getting easier and easier to generate VR environments.

That, I think, is what’s holding AR back.  You can fairly easily use it for marker or location based information, but actually annotating the world visually is still challenging.  I still think AR is of more interest, (maybe just to me), because I see it eventually creating the possibility to see the causes and factors  behind the world, and allow us to understand it better.  I could argue that VR is just extending sims from flat screen to surround, but then I think about the space station, and…I’m still pondering that. Is it revolutionary or just evolutionary?

One session talked about trying to help folks figure out when VR and AR made sense, and this intrigued me. It reminded me that I had tried to characterize the affordances of virtual worlds, and I reckon it’s time to take a stab at doing this for VR and AR.  I believed then that I was able to predict when virtual worlds would continue to find value, and I think results have borne that out.  So, the intent is to try to get on top of when VR and AR make sense.  Stay tuned!

Barry Downes #Realities360 Keynote Mindmap

27 July 2017 by Clark

Barry Downes talked about the future of the VR market with an interesting exploration of the Immersive platform. Taking us through the Apollo 11 product, he showed what went into it and its emotional impact. He showed a video that talked (somewhat simplistically) about how VR environments could be used for learning. (There is great potential, but it's not about content.) He finished with an interesting quote about how VR would be able to incorporate any further media. A second part of the quote said: "Kids will think it's funny [we] used to stare at glowing rectangles hoping to suspend disbelief."

VR Keynote

Maxwell Planck #Realities360 Keynote Mindmap

26 July 2017 by Clark

Maxwell Planck opened the eLearning Guild's Realities 360 conference with a thoughtful and thought-provoking talk on VR. Reflecting on his experience in the industry, he described the transition from storytelling to where he thinks we should go: social adventure. (I want to call it "adventure together". :) A nice start to the event.

Maxwell Planck Keynote Mindmap

What is the Future of Work?

25 July 2017 by Clark

Just what is the Future of Work about? Is it about new technology, or is it about how we work with people? We're seeing amazing new technologies: collaboration platforms, analytics, and deep learning. We're also hearing about new work practices such as teams, working (or reflecting) out loud, and more. Which is it? And/or how do they relate?

It's very clear technology is changing the way we work. We now work digitally, communicating and collaborating. But there are more fundamental transitions happening. We're integrating data across silos, and mining that data for new insights. We can consolidate platforms into single digital environments, facilitating the work. And we're getting smart systems that do things our brains quite literally can't, whether it's complex calculations or reliable rote execution at scale. Plus we have technology-augmented design and prototyping tools that are shortening the time to develop and test ideas. It's a whole new world.

Similarly, we’re seeing a growing understanding of work practices that lead to new outcomes. We’re finding out that people work better when we create environments that are psychologically safe, when we tap into diversity, when we are open to new ideas, and when we have time for reflection. We find that working in teams, sharing and annotating our work, and developing learning and personal knowledge mastery skills all contribute. And we even have new  practices such as agile and design thinking that bring us closer to the actual problem.  In short, we’re aligning practices more closely with how we think, work, and learn.

Thus, either could be seen as ‘the Future of Work’.  Which is it?  Is there a reconciliation?  There’s a useful way to think about it that answers the question.  What if we do either without the other?

If we use the new technologies in old ways, we’ll get incremental improvements.  Command and control, silos, and transaction-based management can be supported, and even improved, but will still limit the possibilities. We can track closer.  But we’re not going to be fundamentally transformative.

On the other hand, if we change the work practices, creating an environment where trust allows both safety  and accountability, we can get improvements whether we use technology or not. People have the capability to work together using old technology.  You won’t get the benefits of some of the improvements, but you’ll get a fundamentally different level of engagement and outcomes than with an old approach.

Together, of course, is where we really want to be. Technology can transformatively amplify those practices. Together, as they say, the whole is greater than the sum of the parts.

I've argued that using new technologies like virtual reality and adaptive learning only makes sense after you first implement good design (otherwise you're putting lipstick on a pig, as the saying goes). The same is true here. Implementing radical new technologies on top of old practices that don't reflect what we know about people is a recipe for stagnation. Thus, to me, the Future of Work starts with practices that align with how we think, work, and learn, augmented with technology, not the other way around. Does that make sense to you?

Augmented Reality Lives!

20 July 2017 by Clark

Augmented Reality (AR) is on the upswing, and I think this is a good thing. I think AR makes sense, and it's nice to see both solid tool support and real use cases emerging. Here's the news, but first, a brief overview of why I like AR.

As I’ve noted before, our brains are powerful, but flawed.  As with any architecture, any one choice will end up with tradeoffs. And we’ve traded off detail for pattern-matching.  And, technology is the opposite: it’s hard to get technology to do pattern matching, but it’s really good at rote. Together, they’re even more powerful. The goal is to most appropriately augment our intellect with technology to create a symbiosis where the whole is greater than the sum of the parts.

Which is why I like AR: it's about annotating the world with information, which augments it to our benefit. It's contextual, that is, doing things because of when and where we are. AR augments sensorily, either auditorily or visually (or kinesthetically, e.g. vibration). Auditory and kinesthetic annotation is relatively easy; devices generate sounds or vibrations (think GPS: "turn left here"). Non-coordinated visual information, information that's not overlaid on the scene, is presented as either graphics or text (think Yelp: maps and distances to nearby options). Tools already exist to do this, e.g. ARIS. However, arguably the most compelling and interesting is aligned visuals.
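A minimal sketch of the non-overlaid, location-based case (all names, coordinates, and thresholds are hypothetical): trigger an annotation when the device is near a point of interest.

```python
# Minimal sketch of location-triggered (non-overlaid) AR annotation.
# Names, coordinates, and thresholds are invented for illustration.
import math

POINTS_OF_INTEREST = [
    {"name": "Cafe Example", "lat": 37.4275, "lon": -122.1697, "note": "Open until 9pm"},
]

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for short distances.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371000

def annotations_for(lat, lon, radius_m=100):
    # Return nearby annotations to display as text/graphics over a map view.
    return [p for p in POINTS_OF_INTEREST
            if distance_m(lat, lon, p["lat"], p["lon"]) <= radius_m]

print(annotations_for(37.4276, -122.1695))
```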

Google Glass was a really interesting experiment, and it's back. The devices – glasses with a camera and a projector that can present information on the glass – were available, but didn't do much with where you were looking. There were generic heads-up displays and a camera, but little alignment between what was seen and what was consequently presented to the user as additional information. That's changed. Google Glass has a new Enterprise Edition, and it's being used to meet real needs and generate real outcomes. Glasses are supporting manufacturing tasks requiring careful placement: the necessary components and steps are highlighted on screen, reducing errors and speeding up outcomes.

And Apple has released its Augmented Reality software toolkit, ARKit, with features to make AR easy. One interesting aspect is built-in machine learning, which could make aligning with objects in the world easy! Incompatible platforms and standards impede progress, but with Google and Apple creating tools for each of their platforms, development can be accelerated. (I hope to find out more at the eLearning Guild's Realities 360 conference.)

While I think Virtual Reality (VR) has an important role to play for deep learning, I think contextual support can be a great support for extending learning (particularly personalization), as well as performance support.  That’s why I’m excited about AR. My vision has been that we’ll have a personal coaching system that will know where and when we are and what our goals are, and be able to facilitate our learning and success. Tools like these will make it easier than ever.

FocusOn Learning reflections

27 June 2017 by Clark

If you follow this blog (and you should :), it was pretty obvious that I was at the FocusOn Learning conference in San Diego last week (previous 2 posts were mindmaps of the keynotes). And it was fun as always.  Here are my reflections on what happened a bit more, as an exercise in meta-learning.

There were three themes to the conference: mobile, games, and video.  I’m pretty active in the first two (two books on the former, one on the latter), and the last is related to things I care and talk about.  The focus led to some interesting outcomes: some folks were very interested in just one of the topics, while others were looking a bit more broadly.  Whether that’s good or not depends on your perspective, I guess.

Mobile was present, happily, and continues to evolve.  People are still talking about courses on a phone, but more folks were talking about extending the learning.  Some of it was pretty dumb – just content or flash cards as learning augmentation – but there were interesting applications. Importantly, there was a growing awareness about performance support as a sensible approach.  It’s nice to see the field mature.

For games, there were positive and negative signs. The good news is that games are being more fully understood in terms of their role in learning, e.g. deep practice. The bad news is that there's still a lot of interest in gamification without a concomitant awareness of the important distinctions. Tarting up drill-and-kill with PBL (points, badges, and leaderboards – the new acronym, apparently) isn't worth significant interest! We know how to drill what must be drilled, but our focus should be on intrinsic interest.

As a side note, the demise of Flash has left us without a good game development environment. Flash was both a development environment and a delivery platform. As a development environment, Flash had a low learning threshold, yet could be used to build complex games. As a delivery platform, however, it's woefully insecure (so much so that it's been proscribed in most browsers). The fact that Adobe couldn't be bothered to generate acceptable HTML5 out of the development environment, and let it languish, leaves the market open for another accessible tool. Unity and Unreal provide good support (as I understand it), but still require coding. So we're not at an easily accessible place. Oh, for HyperCard!

Most of the video interest was in technical issues (how to get quality and/or produce on the cheap), but a lot of interest was also in interactive video. I think branching video is a really powerful learning environment for contextualized decision making. As a consequence, the advent of tools that make it easier is to be lauded. An interesting session with the wise Joe Ganci (@elearningjoe) and a GoAnimate guy talked about when to use video versus animation, which largely seemed to reflect my view (confirmation bias ;) that it's about whether you want more context (video) or concept (animation). Of course, it was also about the cost of production and the need for fidelity (video more than animation in both cases).
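To make "branching video" concrete, here's a minimal sketch of how such a scenario might be represented as a graph of clips and choices (all names and content are invented; real authoring tools handle this structure for you):

```python
# Minimal sketch of a branching-video scenario graph (all content invented).
scenario = {
    "intro":   {"clip": "intro.mp4",   "choices": {"Ask the customer": "ask",
                                                   "Check the manual": "manual"}},
    "ask":     {"clip": "ask.mp4",     "choices": {"Apologize": "resolve"}},
    "manual":  {"clip": "manual.mp4",  "choices": {"Ask the customer": "ask"}},
    "resolve": {"clip": "resolve.mp4", "choices": {}},  # terminal node
}

def play(node_id="intro"):
    # Play the clip for this node, then present the contextual decisions.
    node = scenario[node_id]
    print(f"Playing {node['clip']}")
    for label, target in node["choices"].items():
        print(f"  choice: {label} -> {target}")

play()
```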

There was a lot of interest in VR, which crossed over between video and games. Which is interesting, because it's not inherently tied to games or video! In short, it's a delivery technology. You can do branching scenarios, full game-engine delivery, or just video in VR. The visuals can be generated as video or from digital models. There was some awareness of this, e.g. fun was made of the idea of presenting PowerPoint in VR (just like Second Life ;).

I did an ecosystem presentation that contextualized all three (video, games, mobile) in the bigger picture, and also drew upon their cognitive and then L&D roles. I also deconstructed the game Fluxx (a really fun game with an interesting ‘twist’). Overall, it was a good conference (and nice to be in San Diego, one of my ‘homes’).

Tech and School Problems

14 June 2017 by Clark

After yesterday’s rant about problems in local schools, I was presented with a recent New York Times article. In it, they talked about how the tech industry was getting involved in schools. And while the initiatives seem largely well-intentioned, they’re off target.   There’s a lack of awareness of what meaningful learning is, and what meaningful outcomes could and should be.  And so it’s time to shed a little clarity.

Tech in schools is nothing new; from the early days, Apple and Microsoft vied to provide school computers and get a leg up on learners' future tech choices. Now, however, the big providers have even more relative leverage. School funds continue to be cut, and the size of the tech companies has grown relative to society. So there's a lot of potential leverage.

One of the claims in the article is that the tech companies are able to do what they want, and this  is a concern. They can dangle dollars and technology as bait and get approval to do some interesting and challenging things.

However, some of the approaches have issues beyond the political:

One approach is to teach computer science to every student. The question is: is this worth it? Understanding what computers do well (and easily), and perhaps more importantly what they don't, is necessary, no argument. The argument for computer programming is that it teaches you to break down problems and design solutions. But is computer science necessary? Could it be done with, say, design thinking? Again, I'm all for helping learners acquire good problem-solving skills. But I'm not convinced that this is necessarily a good idea (as beneficial as it is to the tech industry ;).

Another initiative is using algorithms, like the rules Facebook uses to choose what ads to show you, to sequence math. A program, ALEKS, already did this, but this one mixes in gamification. And I think it's patching a bad solution. For one, it appears to use the existing curriculum, which is broken (too much rote, too few transferable skills). And gamification? Can't we, please, try to make math intrinsically interesting by making it useful? Abstract problems don't help. Drilling key skills is good, but there are nuances in the details.
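For illustration only (this is not ALEKS's or any product's actual algorithm), a toy sketch of the general idea of sequencing by estimated mastery:

```python
# Toy sketch of mastery-based sequencing (invented numbers; not any product's algorithm).
mastery = {"fractions": 0.9, "ratios": 0.4, "percentages": 0.2}
prerequisites = {"fractions": [], "ratios": ["fractions"], "percentages": ["fractions", "ratios"]}

def next_topic(threshold=0.8):
    # Pick the weakest unmastered topic whose prerequisites are already mastered.
    ready = [t for t in mastery
             if mastery[t] < threshold
             and all(mastery[p] >= threshold for p in prerequisites[t])]
    return min(ready, key=lambda t: mastery[t]) if ready else None

print(next_topic())  # -> "ratios" with these made-up numbers
```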

A second approach has students choosing the problems they work on, and teachers acting as facilitators. Of course, I'm a fan of this; I've advocated for gradually handing off control of learning to learners, to facilitate their development as self-learners. And in a recently-misrepresented announcement, Finland is moving to topics with interleaved skills wrapped around them (e.g. not one curriculum per subject; you might intersect math and chemistry in studying ecosystems). However, this takes teachers with skills across both domains, and the ability to facilitate discussion around projects. That's a big ask, and has been a barrier to many worthwhile initiatives. Compounding this is that the end of a unit is assessed by a 10-point multiple choice question. I worry about the design of those assessments.

I'm all for school reform. As Mark Warschauer put it, the only things wrong with American education are the curriculum, the pedagogy, and the way we use technology. I think the pedagogy being funded in the latter initiative is a good approach, but there are details that need to be worked out to make it a scalable success. And while problem-solving is a good curricular goal, we need to be thoughtful about how we build it in. Further, motivation is an important component of learning, but should it be intrinsic or extrinsic?

We really could stand to have a deeper debate about learning and how technology can facilitate it. The question is: how do we make that happen?

Evil design?

6 June 2017 by Clark

This is a rant, but it’s coupled with lessons.  

I’ve been away, and one side effect was a lack of internet bandwidth at the residence.  In the first day I’d used up a fifth of the allocation for the whole time (> 5 days)!  So, I determined to do all I could to cut my internet usage while away from the office.  The consequences of that have been heinous, and  on the principle of “it’s ok to lose, but don’t lose the lesson”, I want to share what I learned.  I don’t think it was evil, but it well could’ve been, and in other instances it might be.

So, to start, I’m an Apple fan.  It started when I followed the developments at Xerox with SmallTalk and the Alto as an outgrowth of Alan Kay‘s Dynabook work. Then the Apple Lisa was announced, and I knew this was the path I was interested in. I did my graduate study in a lab that was focused on usability, and my advisor was consulting to Apple, so when the Mac came out I finally justified a computer to write my PhD thesis on. And over the years, while they’ve made mistakes (canceling HyperCard), I’ve enjoyed their focus on making me more productive. So when I say that they’ve driven me to almost homicidal fury, I want you to understand how extreme that is!

I’d turned on iCloud, Apple’s cloud-based storage.  Innocently, I’d ticked the ‘desktop/documents’ syncing (don’t).  Now, with  every other such system that I know of, it’s stored locally *and* duplicated on the cloud.  That is, it’s a backup. That was my mental model.  And that model was reinforced:  I’d been able to access my files even when offline.  So, worried about the bandwidth of syncing to the cloud, I turned it off.

When I did, there was a warning that  said something to the effect of: “you’ll lose your desktop/documents”.  And, I admit, I didn’t interpret that literally (see: model, above).  I figured it would disconnect their syncing. Or I’d lose the cloud version. Because, who would actually steal the files from your hard drive, right?

Well, Apple DID!  Gone. With an option to have them transferred, but….

I turned it back on, but didn't want to not have internet, so I turned it off again, but ticked the box that said to copy the files to my hard drive. COPY BACK MY OWN @##$%^& FILES! (See fury, above.) Of course, it started, and then said "finishing". For 5 days! And I could see that my files weren't coming back at any meaningful rate. But there was work to do!

The support guy I reached had some suggestions that really didn't work. I did try to drag my entire documents folder from the iCloud drive to my hard drive, but it said it was estimating how long it would take, and hung on that for a day and a half. Not helpful.

In the meantime, I started copying over the files I needed to do work, and continuing to generate new ones reflecting what I was working on. Which meant that the folders in the cloud, and the ones on my hard drive that I had copied over, were no longer in sync. And I have a lot of folders in my documents folder: writing, diagrams, client files, lots of important information!

I admit I made some decisions in my panic that weren’t optimal.  However, after returning I called Apple again, and they admitted that I’d have to manually copy stuff back.  This has taken hours of my time, and hours yet to go!

Lessons learned

So, there are several learnings from this. First, this is bad design. It's frankly evil to take someone's hard drive files after making it easy to establish the initial relationship. Now, I don't think Apple's intention was to hurt me this way; they just made a bad decision (I hope; an argument could be made that this was of the "lock them in and then jack them up" variety, but that's contrary to most of their policies, so I discount it). Others, however, do make these decisions (e.g. providers of internet and cable from whom you can only get a one- or two-year price that will then ramp up; unless you remember to check and change, you'll end up paying them more than you should until you get around to noticing and doing something about it). Caveat emptor.

Second, models are important and can be used for or against you. We do  create models about how things work and use evidence to convince ourselves of their validity (with a bit of confirmation bias). The learning lesson is to provide good models.  The warning is to check your models when there’s a financial stake that could take advantage of them for someone else’s gain!

And the importance of models for working and performing is clear. Helping people get good models is an important boost to successful performance!  They’re not necessarily easy to find (experts don’t have access to 70% of what they do), but there are ways to develop them, and you’ll be improving your outcomes if you do.

Finally, until Apple changes their policy, if you’re a Mac and iCloud user I  strongly recommend you avoid the iCloud option to include Desktop and Documents in the cloud unless you can guarantee that you won’t have a bandwidth blockage.  I like the idea of backing my documents to the cloud, but not when I can’t turn it off without losing files. It’s a bad policy that has unexpected consequences to user expectations, and frankly violates my rights to  my data.

We now return you to our regularly scheduled blog topics.

 
