Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: top 10

Conceptual Clarity

6 December 2017 by Clark 1 Comment

Ok, so I can be a bit of a pedant.  Blame it on my academic background, but I believe conceptual clarity is important! If we play fast and loose with terminology, we can be convinced of something without truly understanding it.  Ultimately, we can waste money chasing unwarranted directions, and worse, perhaps even do wrong by our learners.

Where do the problems arise?  Sometimes, it’s easy to ride a bizbuzz bandwagon.  Hey, the topic is hot, and it sounds good.  Other times, it’s just too hard to spend the effort. Yet getting it wrong ends up meaning you’re wasting resources.

Let’s be clear, I’m not talking myths. Those abound, but here I’m talking about ideas that are being used relatively indiscriminately, but in at least one interpretation there’s real value.  The important thing is to separate the wheat from the chaff.

Some concepts that are running around recently and could use some clarity are the following:

Microlearning.  I tried to be clear about this here. In short, microlearning is about small chunks where the learning aggregates over time.  Aka spaced learning.  But other times, people really mean performance support (just-in-time help to succeed in the moment). What you don’t want is someone pretending it’s so unique that they can trademark it.

70:20:10.  This is another that some people deride, and others find value in. I’ve also talked about this.   The question is why they differ, and my answer is that the folks who use it as a way to think more clearly about a whole learning experience find value. Those who fret about the label are missing the point.  And I acknowledge that the label is a barrier, but that horse has bolted.

Neuro- (aka brain- ). Yes, our brains are neurologically based. And yes, there are real implications. Some.  Like ‘the neurons that fire together, wire together’.  And yet there’re a whole lot of discussions about neuro that are really at the next higher level: cognitive.  This is just misleading folks to make it sound more scientific.

Unlearning. There’s a lot of talk about unlearning, but in the neurological sense it doesn’t make sense. You don’t unlearn something.  As far as we can tell, it’s still there, just increasingly hard to activate. The only real way to ‘unlearn’ is to learn some other response to the same situation.  You learn ‘over’ the old learning. Or overlearn.  But not unlearn. It’s an unconcept.

Gamification. This is actually the one that triggered this post. In theory, gamification is the application of game mechanics to learning.  Interestingly, Raph Koster wrote that what makes games fun is that they are intrinsically about learning!  However, there are important nuances.  It’s not just about adding PBL (points, badges, and leaderboards). These aren’t bad things, but they’re secondary.  Designing the intrinsic action around the decisions learners need to be able to make is a deeper and more meaningful implication.  Yet people tend to ignore the latter because it’s ‘harder’.  In reality, it’s just good learning design.

There are more, of course, but hopefully these illustrate the problem. (What are yours?)  Please, please, be professional and take the time to understand our cognitive architecture well enough that you can make these distinctions on your own. We need the conceptual clarity!  Hopefully then we can reserve excitement for ideas that truly add value.

#AECT17 Conference Contributions

16 November 2017 by Clark 1 Comment

So, at the recent AECT 2017 conference, I participated in three ways that are worth noting.  I had the honor of participating in two sessions based upon writings I’d contributed, and one based upon my own cogitations. I thought I’d share the thinking.

For my own presentation, I shared my efforts to move ‘rapid elearning’ forward. I put Van Merrienboer’s 4 Component ID and Guy Wallace’s Lean ISD as a goal, but recognized the need for intermediate steps like Michael Allen’s SAM, David Merrill’s ‘Pebble in a Pond‘, and Cathy Moore’s Action Mapping. I suggested that even these might be too far, and that practitioners want steps that are slight improvements on their existing processes. These included three things: heuristics, tools, and collaboration. Here I was indicating specifics for each that could move from well-produced to well-designed.

In short, I suggest that while collaboration is good, many corporate situations want to minimize staff. Consequently, I suggest identifying those critical points where collaboration will be useful. Then, I suggest short cuts in processes to the full approach. So, for instance, when working with SMEs focus on decisions to keep the discussion away from unnecessary knowledge. Finally, I suggest the use of tools to support the gaps our brain architectures create.   Unfortunately, the audience was small (27 parallel sessions and at the end of the conference) so there wasn’t a lot of feedback. Still, I did have some good discussion with attendees.

Then, for one of the two participation sessions, the book I contributed to solicited a wide variety of position papers from respected ed tech individuals, and then solicited responses to same.  I had responded to a paper suggesting three trends in learning: a lifelong learning record system, a highly personalized learning environment, and expanded learner control of time, place and pace of instruction. To those three points I added two more: the integration of meta-learning skills and the breakdown of the barrier between formal learning and lifelong learning. I believe both are going to be important, the former because of the decreasing half-life of knowledge, the latter because of the ubiquity of technology.

Because the original author wasn’t present, I was paired for discussion with another author who shares my passion for engaging learning, and that was the topic of our discussion table.  The format was fun; we were distributed in pairs around tables, and attendees chose where to sit. We had an eager group who were interested in games, and my colleague and I took turns answering and commenting on each other’s comments. It was a nice combination.  We talked about the processes for design, selling the concept, and more.

For the other participation session, the book was a series of monographs on important topics.  The discussion chose a subset of four topics: MOOCs, Social Media, Open Resources, and mLearning. I had written the mLearning chapter.  The chapter format included ‘take home’ lessons, and the editor wanted our presentations to focus on these. I posited the basic mindshifts necessary to take advantage of mlearning. These included five basic principles:

  1. mlearning is not just mobile elearning; mlearning is a wide variety of things.
  2. the focus should be on augmenting us, whether our formal learning, or via performance support, social, etc.
  3. the Least Assistance Principle, in focusing on the core stuff given the limited interface.
  4. leverage context, take advantage of the sensors and situation to minimize content and maximize opportunity.
  5. recognize that mobile is a platform, not a tactic or an app; once you ‘go mobile’, folks will want more.

The sessions were fun, and the feedback was valuable.

Addressing Changes

25 October 2017 by Clark Leave a Comment

Yesterday, I listed some of the major changes that L&D needs to acknowledge. What we need now is to look at the top steps that need to be taken.  As serious practitioners in a potentially valuable field, we need to adapt to the changing environment as much as we need to assist our charges to do so. So what’s involved?

We need to get a grasp on technology affordances. It’s not enough to know that the latest technology exists, whether AI, AR, or VR.  Instead, we have to understand what each means in the context of our brains.  What key capabilities do they bring?  Can VR go beyond entertainment to help us learn better? How can AI partner with us?  If we can make practical use of AR, what would we do with it?

In conjunction, we need to  understand the realities about us.  We need to take ownership and have a suitable background in how people  really think, work, and learn. Further, we need to recognize that they’re all tied together, not separate things. So, for instance, we learn as we work, we think as we learn, etc.

For example, we need to understand situated and distributed cognition. That is, we need to grasp that we’re not formal logical thinkers, but instead very context dependent, and that our thinking is across our tools. As a consequence, we need to design solutions that recognize our individual situations, and leverage technology as an augment. So we want to design human/computer system solutions to problems, not just human or system solutions.

We also need to understand cultural elements. We work better when we are given meaningful work, freedom to pursue those goals, and get the necessary support to succeed. This is  not micromanagement, but instead, is leadership and coaching. We also need an environment where it’s safe, expected even, to experiment and even to make mistakes.

We also need to understand that we work better (read: produce better results), when we work together in particular ways. Where we understand that we should allow individual thought first, but then pool those ideas. And we need to show our work and the underlying thinking. Moreover, again, it has to be safe to do so!

And, these are all tied together into a systemic approach!  It can’t be piecemeal, because working together and out loud can’t be divorced from the technology used to enable these capabilities. And giving people meaningful work and not letting them work together, or vice-versa, just won’t achieve the necessary critical mass.

Finally, we also need to do this in alignment with the business. And, let’s be clear, in ways that can be measured!  We need to understand the critical performance needs of the organization, and demonstrate that we’re impacting them in the ways above.

This can be done, and it will be the hallmark of successful organizations. We’re already seeing a wide variety of converging evidence that these changes lead to success. The question is, are you going to lead your organization forward into the future, or keep your head down and do what you’ve always done?

Stay Curious

18 October 2017 by Clark Leave a Comment

One of my ongoing recommendations to people grew out of a toss-off line, playing off an advertisement. Someone asked about a strategy for continuing to learn (if memory serves), and I quipped “stay curious, my friends”.  However, as I ponder it, I think more and more that such an approach is key.

I was thinking of this trend the other day as “intellectual restlessness”. What I’m talking about is being intrigued by things you don’t understand, whether they’ve persisted or only recently crossed your awareness, and pursuing them.  It’s not just saying “how interesting”, but recognizing connections, and pondering how it could change what you do. Even to the point of actually changing!

It also includes pointing other people to interesting things that would benefit them.  This doesn’t always have to happen, but in the spirit of cooperation (in the Jarche sense), we could and should contribute, curate, when we can.  And, ideally, leave trails of your explorations that others can benefit from. Writings, diagrams, videos, what have you, help others as well as yourself.

I was reminiscing that more than 30 years ago, on top of my job designing educational computer games, I was already curious. I still have copies of the old InfoWorld magazines containing reviews I did (one hardware, one software), as well as a journal article based upon undergraduate research I was fortunate to participate in.

And that persistence in curiosity has led to a trail of artefacts. You may have come across the books, book chapters, articles, presentations, etc. And, of course, this blog for the past decade and more. (May it continue!) However, I’m not here to tout my wares, but instead to point to the benefit of being curious.

As things change faster, a continuing interest is what provides an ongoing ability to adapt. The ongoing changes in jobs and work aren’t likely to lessen.  Staying curious benefits you, your colleagues and friends, and I reckon society in general.  You want to look at many sources of information, track tangential fields, and be open to new ideas.

This isn’t just your choice, of course; ideally your organization is supportive. These lateral inputs are a component of innovation, as is time to allow for serendipity and incubation. Orgs that want to be agile will need these capabilities as well. I suppose organizations need to stay curious too!

 

Organizational terms

26 September 2017 by Clark Leave a Comment

Listening to a talk last week led me to ponder the different terms for what it is I lobby for.  The goal is to make organizations accomplish their goals, and to continue to be able to do so.  In the course of my inquiry, I explored and uncovered several different ‘organizational’ terms.  I thought I should lay them out here for my (and your) thoughts.

For one, it seemed to be about organizational effectiveness. That is, the goal is to make organizations not just efficient, but capable of optimal levels of performance.  When you look at the Wikipedia definition, you find that they’re about “achieving the outcomes the organization intends to produce”.  They do this through alignment, increasing tradeoffs, and facilitating capacity building.  The definition also discusses improvements in decision making, learning, group work, and tapping into the structures of self-organizing and adaptive systems, all of which sound right.

Interestingly, most of the discussion seems to focus on not-for-profit organizations. While I agree on their importance, and have done considerable work with such organizations, I guess I’d like to see a broader focus. Also, and this is purely my subjective opinion, the newer thoughts seem grafted on, and the core still seems to be about producing good numbers. Any time someone uses the phrase ‘human capital’, I get leery.

Organizational engineering is a phrase that popped to mind (similar to learning engineering). Here, Wikipedia defines it as an offshoot of org development, with a focus on information processing. And, coming from cognitive psychology, that sounds good, with a caveat.  The reality is, we’re flawed as ideal thinkers. And in the definition it also talks about ‘styles’, which are a problem all on their own. Overall, this appears to be more a proprietary suite of approaches under a label. While it uses nice sounding terms, the reality (again, my inferences here) is that it may be designed for an audience that doesn’t exist.

The final candidate is organizational development. Here the definition touts “implementing effective change”. The field is defined as interdisciplinary and drawing on psych, sociology, and more.  In addition to systems thinking and decision-making, there’s an emphasis on organizational learning and on coaching, so it appears more human-focused. The core values also talk about human beings being valued for themselves, not as resources, and looking at the complex picture.  Overall this approach resonates with me more, not just philosophically, but pragmatically.

As I look at what’s emerging from the scientific study of people and organizations, as summed up in a variety of books I’ve touted here, there are some very clear lessons. For one, people respond when you treat them as meaningful parts of a worthwhile endeavor. When you value people’s input and trust them to apply their talents to the goals, things get done. Caring enough to develop them in ways that are supportive, not punitive, and not just your goals but theirs too, retains their interest and commitment. And when you provide them with an environment to succeed and improve, you get the best organizational outcomes.

There’s more about how to get started.  Small steps, such as working in a small group (*cough* L&D? *cough* ;), and developing the practices and the infrastructure, then spreading, has been shown to be better than a top-down initiative. Experimenting and reviewing the outcomes, and continually tweaking likewise.  Ensuring that it’s coaching, not ‘managing’ (managers are the primary reason people leave companies).  Etc.

All this shouldn’t be a surprise, but it’s not trivial to do but takes persistence.  And, it flies in the face of much of management and HR practices.  I don’t really care what we label it, I just want to find a way to talk about things that makes it easy for people to know what I’m talking about.  There are goals to achieve, so my main question is how do we get there?  Anyone want to get started?

Simulations versus games

9 August 2017 by Clark Leave a Comment

At the recent Realities 360 conference, I saw some confusion about the difference between a simulation and a game. And while I made some important distinctions in my book on the topic, I realize it may be time to revisit them. So here I’m talking about some conceptual discriminations that I think are important.

Simulations

As I’ve mentioned, simulations are models of the world. They capture certain relationships we believe to be true about the world. (For that matter, they can represent worlds that aren’t real, certainly the case in games.) They don’t (can’t) capture all the world, but a segment we feel is important to model. We tend to validate these models by testing them to see if they behave like our real world.  You can also think about simulations as being in a ‘state’ (set of values in variables), and moving to others by rules.  Frequently, we include some variability in these models, just as is reflected in the real world. Similarly, these simulations can model considerable complexity.

Such simulations are built out of sets of variables that represent the state of the world, and rules that represent the relationships present. There are several ways things change. Some variables can be changed by rules that act on the basis of time (while countdown timer = on, countdown = countdown -1). Variables can also interact (if countdown=0: if 1 g adamantium and 1 g dilithium, Temperature = Temperature +1000, adamantium = adamantium – 1g, dilithium = dilithium – 1g).  Other changes are based upon learner actions (if learner flips the switch, countdown timer = on).
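To make the variables-and-rules idea concrete, here’s a minimal sketch of the countdown example above as code. Everything here (the state keys, the function names, the specific numbers) is illustrative, not from any real system; it just shows time-based rules, interacting variables, and a learner action operating on one shared state.

```python
# A toy rule-based simulation: the world is a set of variables (the state),
# and rules that change them each time step or in response to learner actions.

def tick(state):
    """Advance the simulation one time step by applying each rule."""
    # Time-based rule: while the countdown timer is on, count down.
    if state["timer_on"]:
        state["countdown"] -= 1
    # Interacting variables: at zero, the substances react if both remain.
    if (state["countdown"] == 0
            and state["adamantium"] >= 1 and state["dilithium"] >= 1):
        state["temperature"] += 1000
        state["adamantium"] -= 1
        state["dilithium"] -= 1
    return state

def flip_switch(state):
    """Learner action: flipping the switch turns the countdown timer on."""
    state["timer_on"] = True
    return state

state = {"timer_on": False, "countdown": 3,
         "adamantium": 2, "dilithium": 2, "temperature": 20}
state = flip_switch(state)       # the learner acts...
for _ in range(3):               # ...and the clock runs
    state = tick(state)
print(state["temperature"])      # the reaction fires once countdown hits zero
```

Adding the variability mentioned above would just mean a rule drawing on `random`, and a richer model means more variables and more rules; the shape stays the same.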

Note that you may already have a simulation. In business, there may already exist a model of particular processes, particularly if they’re proprietary systems.

From a learning point of view, simulations allow motivated and self-effective learners to explore the relationships they need to understand. However, we can’t always assume motivated and self-effective learners. So we need some additional work to turn a simulation into a learning experience.

Scenarios

One effective way to leverage simulations is to choose an initial state (or ‘space of states’, a start point with some variation), and a state (or set) that constitutes ‘win’. We also typically have states that also represent ‘fail’.  We choose those states so that the learner can’t get to ‘win’ without understanding the necessary relationships.   The learner can try and fail until they discover the necessary relationships.  These start and goal states serve as scaffolding for the learning process.    I call these simulations with start and stop states ‘scenarios’.

This is somewhat complicated by the existence of ‘branching scenarios’. There are initial and goal states and learner actions, but they are not represented by variables and rules. The relationships in branching scenarios are implicit in the links instead of explicit in the variables and rules. And they’re easier to build!  Still, they don’t have the variability that typically is possible in a simulation. There’s an inflection point (qualitative, not quantitative) where the complexity of controlling the branches renders it more sensible to model the world as a simulation rather than track all the branches.
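The contrast can be sketched in a few lines. In a branching scenario the relationships live entirely in the links between nodes; there are no variables or rules at all. The node names and prompts below are invented purely for illustration:

```python
# A toy branching scenario: states are nodes, learner choices are links.
# 'win' and 'fail' are just nodes with no outgoing links.

scenario = {
    "start": {"prompt": "The server is down. What do you do first?",
              "choices": {"reboot": "lost_logs", "check_logs": "found_cause"}},
    "lost_logs": {"prompt": "Rebooting cleared the logs. You can't diagnose it.",
                  "choices": {}},   # a 'fail' state
    "found_cause": {"prompt": "The logs show a full disk. Fixed!",
                    "choices": {}}, # the 'win' state
}

def play(node, picks):
    """Follow a sequence of learner choices through the link structure."""
    for pick in picks:
        node = scenario[node]["choices"][pick]
    return node

print(play("start", ["check_logs"]))  # → found_cause
```

The inflection point mentioned above is visible here: every added decision multiplies the nodes you must hand-author, whereas in a simulation the same growth is absorbed by a few extra variables and rules.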

Games

The problem here is that too often people will build a simulation and call it a game. I once reviewed a journal submission about a ‘game’ where the authors admitted that players thought it was boring. Sorry, then it’s not a game!  The difference between a simulation and a game is a subjective experience of engagement on the part of the player.

So how do you get from a simulation to a game?  It’s about tuning.  It’s about adjusting the frequency of events, and their consequences, such that the challenge falls into the zone between boring and frustrating. Now, for learning, you can’t change the fundamental relationships you’re modeling, but you can adjust items like how quickly events occur, and the importance of being correct. And it takes testing and refinement. Will Wright, a game designer’s game designer, once proposed that tuning is 9/10’s of the work!  Now that’s for a commercial game, but it gives you an idea.

You can also use gamification (scores to add competition), but, please, only after you first expend the effort to make the game intrinsically interesting. Tap into why they should care about the experience, and bake that in.

Is it worth it to actually expend effort to make the experience engaging?  I believe that the answer is yes. Perhaps not to the level of a game people will pay $60 to play, but some effort to manifest the innate meaningfulness is worth it. Games minimize the time to obtain competency because they optimize the challenge.  You have sticks as well as carrots, so you don’t need to put in $M budgets, but do tune until your learners have an engaging and effective experience.

So, does this help? What questions do you still have?

Realities 360 Reflections

1 August 2017 by Clark 1 Comment

So, one of the two things I did last week was attend the eLearning Guild‘s Realities 360 conference.  Ostensibly about Augmented Reality (AR) and Virtual Reality (VR), it ended up being much more about VR. Which isn’t a bad thing; it’s probably as much a comment on the state of the industry as anything.  However, there were some interesting learnings for me, and I thought I’d share them.

First, I had a very strong visceral exposure to VR. While I’ve played with Cardboard on the iPhone (you can find a collection of resources for Cardboard here), it’s not quite the same as a full VR experience.  The conference provided a chance to try out apps for the HTC Vive, Sony Playstation VR, and the Oculus.  On the Vive, I tried a game where you shot arrows at attackers.  It was quite fun, but mostly developed some motor skills. On the Oculus, I flew an X-Wing fighter through an asteroid field, escorting a ship and shooting enemy TIE fighters.  Again, fun, but mostly about training my motor skills in this environment.

It was another one, I think on the Vive, that gave me a real experience.  In it, you’re floating around the International Space Station. And it was very cool to see the station and experience the immersion of 3D, but it was very uncomfortable.  Because I was trying to fly around (instead of using handholds), my viewpoint would fly through the bulkhead doors. The positioning meant that I got the visual cues that my chest was passing through the metal edge.  This was extremely disturbing to me!  As I couldn’t control it well, I was doing this continually, and I didn’t like it. Partly it was the control, but it was also the total immersion. And that was impressive!

There are empirical results that demonstrate better learning outcomes for VR, and certainly I can see that, particularly for tasks that are inherently 3D. There’s also another key result, as was highlighted in the first keynote: that VR is an ’empathy’ machine. There have been uses for things like understanding the world according to a schizophrenic, and a credit card call center helping employees understand the lives of card users.

On principle, such environs should support near transfer when designed to closely mimic the actual performance environment. (Think: flight or medicine simulators.)  And the tools are getting better. There’s an app that allows you to take photos of a place to put into Cardboard, and game engines (Unity or Unreal or both) will now let you import AutoCAD models.  There was also a special camera that could sense the distances in a space and automatically generate a model of it.  The point being that it’s getting easier and easier to generate VR environments.

That, I think, is what’s holding AR back.  You can fairly easily use it for marker or location based information, but actually annotating the world visually is still challenging.  I still think AR is of more interest, (maybe just to me), because I see it eventually creating the possibility to see the causes and factors  behind the world, and allow us to understand it better.  I could argue that VR is just extending sims from flat screen to surround, but then I think about the space station, and…I’m still pondering that. Is it revolutionary or just evolutionary?

One session talked about trying to help folks figure out when VR and AR made sense, and this intrigued me. It reminded me that I had tried to characterize the affordances of virtual worlds, and I reckon it’s time to take a stab at doing this for VR and AR.  I believed then that I was able to predict when virtual worlds would continue to find value, and I think results have borne that out.  So, the intent is to try to get on top of when VR and AR make sense.  Stay tuned!

What is the Future of Work?

25 July 2017 by Clark Leave a Comment

Just what is the Future of Work about? Is it about new technology, or is it about how we work with people?  We’re seeing amazing new technologies: collaboration platforms, analytics, and deep learning. We’re also hearing about new work practices such as teams, working (or reflecting) out loud, and more.  Which is it? And/or how do they relate?

It’s very clear technology is changing the way we work. We now work digitally, communicating and collaborating.  But there’re more fundamental transitions happening. We’re integrating data across silos, and mining that data for new insights. We can consolidate platforms into single digital environments, facilitating the work.  And we’re getting smart systems that do things our brains quite literally can’t, whether it’s complex calculations or reliable rote execution at scale. Plus we have technology-augmented design and prototyping tools that are shortening the time to develop and test ideas. It’s a whole new world.

Similarly, we’re seeing a growing understanding of work practices that lead to new outcomes. We’re finding out that people work better when we create environments that are psychologically safe, when we tap into diversity, when we are open to new ideas, and when we have time for reflection. We find that working in teams, sharing and annotating our work, and developing learning and personal knowledge mastery skills all contribute. And we even have new  practices such as agile and design thinking that bring us closer to the actual problem.  In short, we’re aligning practices more closely with how we think, work, and learn.

Thus, either could be seen as ‘the Future of Work’.  Which is it?  Is there a reconciliation?  There’s a useful way to think about it that answers the question.  What if we do either without the other?

If we use the new technologies in old ways, we’ll get incremental improvements.  Command and control, silos, and transaction-based management can be supported, and even improved, but will still limit the possibilities. We can track closer.  But we’re not going to be fundamentally transformative.

On the other hand, if we change the work practices, creating an environment where trust allows both safety  and accountability, we can get improvements whether we use technology or not. People have the capability to work together using old technology.  You won’t get the benefits of some of the improvements, but you’ll get a fundamentally different level of engagement and outcomes than with an old approach.

Together, of course, is where we really want to be. Technology can have a transformative amplification to those practices. Together, as they say, the whole is greater than the sum of the parts.

I’ve argued that using new technologies like virtual reality and adaptive learning only make sense  after you first implement good design (otherwise you’re putting lipstick on a pig, as the saying goes).  The same is true here. Implementing radical new technologies on top of old practices that don’t reflect what we know about people, is a recipe for stagnation.  Thus, to me, the Future of Work starts with practices that align with how we think, work, and learn, and are augmented with technology, not the other way around.  Does that make sense to you?

Tech and School Problems

14 June 2017 by Clark Leave a Comment

After yesterday’s rant about problems in local schools, I was presented with a recent New York Times article. In it, they talked about how the tech industry was getting involved in schools. And while the initiatives seem largely well-intentioned, they’re off target.   There’s a lack of awareness of what meaningful learning is, and what meaningful outcomes could and should be.  And so it’s time to shed a little clarity.

Tech in schools is nothing new, from the early days of Apple and Microsoft vying to provide school computers and getting a leg up on learners’ future tech choices.  Now, however, the big providers have even more relative leverage. School funds continue to be cut, and the size of the tech companies has grown relative to society. So there’s a lot of potential leverage.

One of the claims in the article is that the tech companies are able to do what they want, and this  is a concern. They can dangle dollars and technology as bait and get approval to do some interesting and challenging things.

However, some of the approaches have issues beyond the political:

One approach is to teach computer science to every student.  The question is: is this worth it?  Understanding what computers do well (and easily), and perhaps more importantly what they don’t, is necessary, no argument. The argument for computer programming is that it teaches you to break down problems and design solutions. But is computer science necessary?  Could it be done with, say, design thinking?  Again, I’m all for helping learners acquire good problem-solving skills.  But I’m not convinced that this is necessarily a good idea (as beneficial as it is to the tech industry ;).

Another initiative is using algorithms, rules like the ones that Facebook uses to choose what ads to show you, to sequence math.  A program, ALEKS, already did this, but this one mixes in gamification. And I think it’s patching a bad solution. For one, it appears to be using the existing curriculum, which is broken (too many rote abilities, too few transferable skills).  And gamification?  Can’t we, please, try to make math intrinsically interesting by making it useful?  Abstract problems don’t help.  Drilling key skills is good, but there are nuances in the details.

A second approach has students choosing the problems they work on, and teachers being facilitators.  Of course, I’m a fan of this; I’ve advocated for gradually handing off control of learning to learners, to facilitate their development of self-learning. And in a recently-misrepresented announcement, Finland is moving to topics with interleaved skills wrapped around them (e.g. not one curriculum, but you might intersect math and chemistry in studying ecosystems). However, this takes teachers with skills across both domains, and the ability to facilitate discussion around projects.  That’s a big ask, and has been a barrier to many worthwhile initiatives.   Compounding this is that the end of a unit is assessed by a 10-point multiple choice question.  I worry about the design of those assessments.

I'm all for school reform. As Mark Warschauer put it, the only things wrong with American education are the curriculum, the pedagogy, and the way we use technology.  I think the pedagogy being funded in this last approach is a good one, but there are details to work out to make it a scalable success.  And while problem-solving is a good curricular goal, we need to be thoughtful about how we build it in. Further, motivation is an important component of learning, but should it be intrinsic or extrinsic?

We really could stand to have a deeper debate about learning and how technology can facilitate it. The question is: how do we make that happen?

Evil design?

6 June 2017 by Clark 1 Comment

This is a rant, but it’s coupled with lessons.  

I've been away, and one side effect was a lack of internet bandwidth at the residence.  In the first day I'd used up a fifth of the allocation for the whole stay (> 5 days)!  So, I determined to do all I could to cut my internet usage while away from the office.  The consequences of that have been heinous, and on the principle of "it's ok to lose, but don't lose the lesson", I want to share what I learned.  I don't think it was evil, but it well could've been, and in other instances it might be.

So, to start, I’m an Apple fan.  It started when I followed the developments at Xerox with SmallTalk and the Alto as an outgrowth of Alan Kay‘s Dynabook work. Then the Apple Lisa was announced, and I knew this was the path I was interested in. I did my graduate study in a lab that was focused on usability, and my advisor was consulting to Apple, so when the Mac came out I finally justified a computer to write my PhD thesis on. And over the years, while they’ve made mistakes (canceling HyperCard), I’ve enjoyed their focus on making me more productive. So when I say that they’ve driven me to almost homicidal fury, I want you to understand how extreme that is!

I'd turned on iCloud, Apple's cloud-based storage.  Innocently, I'd ticked the 'desktop/documents' syncing option (don't).  Now, in every other such system that I know of, files are stored locally *and* duplicated in the cloud.  That is, it's a backup. That was my mental model.  And that model was reinforced: I'd been able to access my files even when offline.  So, worried about the bandwidth of syncing to the cloud, I turned it off.

When I did, there was a warning that said something to the effect of: "you'll lose your desktop/documents".  And, I admit, I didn't interpret that literally (see: model, above).  I figured it would disconnect the syncing. Or I'd lose the cloud version. Because, who would actually steal the files from your hard drive, right?

Well, Apple DID!  Gone. With an option to have them transferred, but….

I turned it back on, but couldn't afford to keep burning bandwidth, so I turned it off again, this time ticking the box that said to copy the files to my hard drive.  COPY BACK MY OWN @##$%^& FILES!  (See fury, above.)  Of course, it started, and then said "finishing".  For 5 days!  And I could see that my files weren't coming back at any meaningful rate. But there was work to do!

The support guy I reached had a suggestion that didn't work. I did try dragging my entire documents folder from the iCloud drive to my hard drive, but it said it was estimating how long the copy would take, and hung on that for a day and a half.  Not helpful.

In the meantime, I started copying over the files I needed to do work, and continuing to generate new ones reflecting what I was working on.  Which meant that the folders in the cloud, and the ones I had copied to my hard drive, were no longer in sync.  And I have a lot of folders in my documents folder: writing, diagrams, client files, lots of important information!

I admit I made some decisions in my panic that weren't optimal.  However, after returning I called Apple again, and they confirmed that I'd have to manually copy everything back.  This has taken hours of my time, with hours yet to go!

Lessons learned

So, there are several lessons from this.  First, this is bad design. It's frankly evil to take someone's hard drive files after making it easy to establish the initial relationship.  Now, I don't think Apple's intention was to hurt me this way; they just made a bad decision (I hope; an argument could be made that this was of the "lock them in and then jack them up" variety, but that's contrary to most of their policies, so I discount it).  Others, however, do make these decisions deliberately (e.g. internet and cable providers who offer only a 1- or 2-year promotional price that then ramps up, so unless you remember to check and change, you'll end up paying them more than you should until you get around to noticing and doing something about it).  Caveat emptor.

Second, models are important, and can be used for or against you. We do create models of how things work, and use evidence to convince ourselves of their validity (with a bit of confirmation bias). The learning lesson is to provide good models.  The warning is to check your models whenever there's a financial stake that could let someone else take advantage of them for their own gain!

And the importance of models for working and performing is clear. Helping people get good models is an important boost to successful performance!  They're not necessarily easy to find (experts don't have access to 70% of what they do), but there are ways to develop them, and you'll improve your outcomes if you do.

Finally, until Apple changes this policy, if you're a Mac and iCloud user I strongly recommend avoiding the iCloud option to include Desktop and Documents in the cloud, unless you can guarantee you won't hit a bandwidth constraint.  I like the idea of backing up my documents to the cloud, but not when I can't turn it off without losing files. It's a bad policy that confounds user expectations, and frankly violates my rights to my data.

We now return you to our regularly scheduled blog topics.
