Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

29 September 2017

Mundanities

Clark @ 8:02 AM

This post is late, as my life has been a little less reflective and a little more filled with some mundane issues. There are some changes here around the Quinnstitute, and they take bandwidth. So here’s a small update on these mundanities, with some lessons:

First, I moved my office from the side of the house back to the front. My son had occupied it, but he’s settled into an apartment for college, and I prefer the view out to the street (to keep an eye on the neighborhood). Of course, this entailed some changes:

My ergonomic chair stopped working, and it took several days to a) find someone who’d repair it, b) get it there, c) wait for it to get fixed, and d) get it back. It was worth it (a lot less than replacing it), and ergonomics is important.

Speaking of which, I could now also get a standup desk, or in my case one of those convertible desks that lets you raise and lower your workspace. I’ve been wanting one since the research came out on the problems with sitting. We’d previously constructed a custom desktop (with legs from Ikea!) for the odd-shaped room, so it was easiest to just put the new unit on top. So far, so good. Strongly recommended.

Also bought a used bookshelf (rather than move the one from the old office).  Real wood, real heavy.  Used those ‘forearm forklift’ straps to get it in. They work!  And, this being earthquake country, had to strap it to the wall. Still to come: filling with books.

At the same time, fed up with all the companies that provide internet and cable television, we decided to change. (We changed mobile providers back in January.) As I noted previously, companies use policies to their advantage. One of the approaches is to sell you a two-year package, but then provide no notification when the time’s up and the rate jumps. And you can’t find a simple low-rate provider (I don’t even mind if it’s higher than the bonus deal). Everyone uses this practice. Sigh.

As I said, I couldn’t find anyone better, but decided to change anyway. That involved conversations, and research, and installation time, and turning off the old systems. At least we’re getting a) a lower rate, b) a nicer DVR, and c) faster internet. For the time being. While the new provider promised to ping me before the plan runs out, the old provider says they can’t. See what I mean? Regardless, I’ve set a reminder for before it expires, to sign up anew. Or change again. That’s the lesson on this one.

And of course there are some conversations about upcoming presentations. I was away last week presenting, and have one coming up next month (ATD China Summit; if you’re near Shanghai, say hello) and several in November at AECT in Jacksonville. You’ve seen some of the AI reflections; more are likely to come on the new topics.

And there’s been some background work. Reading a couple of books, and working on two projects. Stay tuned for a couple of new things early next year.

The lesson, of course, is that trying to find time to reflect while you’re executing on mundanities is more challenging, but it’s still a valuable investment. I fight to make the time; I hope you do too!

26 September 2017

Organizational terms

Clark @ 8:09 AM

Listening to a talk last week led me to ponder the different terms for what it is I lobby for. The goal is to help organizations accomplish their goals, and to continue to be able to do so. In the course of my inquiry, I explored and uncovered several different ‘organizational’ terms. I thought I’d lay them out here for my (and your) consideration.

For one, it seemed to be about organizational effectiveness. That is, the goal is to make organizations not just efficient, but capable of optimal levels of performance. When you look at the Wikipedia definition, you find that it’s about “achieving the outcomes the organization intends to produce”. Organizations do this through alignment, improving tradeoffs, and facilitating capacity building. The definition also discusses improvements in decision making, learning, group work, and tapping into the structures of self-organizing and adaptive systems, all of which sound right.

Interestingly, most of the discussion seems to focus on not-for-profit organizations. While I agree on their importance, and have done considerable work with such organizations, I’d like to see a broader focus. Also, and this is purely my subjective opinion, the newer thoughts seem grafted on, and the core still seems to be about producing good numbers. Any time someone uses the phrase ‘human capital’, I get leery.

Organizational engineering is a phrase that popped to mind (similar to learning engineering). Here, Wikipedia defines it as an offshoot of org development, with a focus on information processing. Coming from cognitive psychology, that sounds good, with a caveat: the reality is, we’re flawed as ideal thinkers. The definition also talks about ‘styles’, which are a problem all on their own. Overall, this appears to be more a proprietary suite of approaches under a label. While it uses nice-sounding terms, the reality (again, my inference) is that it may be designed for an audience that doesn’t exist.

The final candidate is organizational development. Here the definition touts “implementing effective change”. The field is defined as interdisciplinary, drawing on psychology, sociology, and more. In addition to systems thinking and decision-making, there’s an emphasis on organizational learning and on coaching, so it appears more human-focused. The core values also talk about human beings being valued for themselves, not as resources, and about looking at the complex picture. Overall this approach resonates with me more, not just philosophically, but pragmatically.

As I look at what’s emerging from the scientific study of people and organizations, as summed up in a variety of books I’ve touted here, there are some very clear lessons. For one, people respond when you treat them as meaningful parts of a worthwhile endeavor. When you value people’s input and trust them to apply their talents to the goals, things get done. Caring enough to develop them in ways that are supportive, not punitive, and in service of not just your goals but theirs too, retains their interest and commitment. And when you provide them with an environment to succeed and improve, you get the best organizational outcomes.

There’s more about how to get started. Small steps, such as working in a small group (*cough* L&D? *cough* ;), developing the practices and the infrastructure, and then spreading, have been shown to work better than a top-down initiative. Likewise experimenting, reviewing the outcomes, and continually tweaking. And ensuring that it’s coaching, not ‘managing’ (managers are the primary reason people leave companies). Etc.

All this shouldn’t be a surprise, but it’s not trivial to do; it takes persistence. And it flies in the face of much of management and HR practice. I don’t really care what we label it; I just want a way to talk about these things that makes it easy for people to know what I mean. There are goals to achieve, so my main question is: how do we get there? Anyone want to get started?

15 September 2017

AI Reflections

Clark @ 8:07 AM

Last night I attended a session on “Our Relationship with AI” sponsored by the Computer History Museum and the Partnership on AI. In a panel format, noted journalist John Markoff moderated Apple’s Tom Gruber, AAAI President Subbarao Kambhampati, and IBM Distinguished Research Scientist Francesca Rossi. The overarching theme was: how are technologists, engineers, and organizations designing AI tools that enable people and devices to understand and work with each other?

It was an interesting session, with the conversation ranging from what AI is, to what it could and should be used for, and how to develop it in appropriate ways. Concerns about AI’s capability, roles, and potential misuses were addressed. Here I’m presenting just a couple of the thoughts it triggered, as I’ve previously riffed on IA (Intelligence Augmentation) and Ethics.

One of the questions that arose was whether AI is engineering or science. The answer, of course, is both. There’s ongoing research on how to get AI to do meaningful things, which is the science part. Here we might see AI that can learn to play video games.  Applying what’s currently known to solve problems is the engineering part, like making chatbots that can answer customer service questions.

On a related note was the question of what AI can do. Put very simply, the proposal was that AI can do anything you can make a judgment on in a second: whether what you see is a face, or whether a claim is likely to be fraudulent. If you can provide a good (large) training set that says ‘here’s the input, and this is what the output should be’, you can train a system to do it. Or, in a well-defined domain, you can say ‘here are the logical rules for how to proceed’, and build that system.
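To make that input/output idea concrete, here’s a minimal sketch (not from the panel; the features, thresholds, and data are hypothetical, using Python and scikit-learn) of both routes: training a fraud classifier from labeled examples, and encoding explicit rules for a well-defined domain:

```python
# Sketch of 'here's the input, here's what the output should be':
# train on labeled examples, then judge new cases. Hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical claim features: [amount, days_since_policy, prior_claims]
X = rng.normal(size=(1000, 3))
# Hypothetical labels: 1 = fraudulent, 0 = legitimate
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The rule-based route for a well-defined domain: encode the logic directly.
def rule_based_flag(amount, days_since_policy, prior_claims):
    return amount > 10000 and days_since_policy < 30  # hypothetical thresholds
```

Either way, the system only covers the cases its data or rules cover, which is exactly the limit discussed next.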

The ability to do these tasks, it was further pointed out, is what leads to fear: “Wow, they can be better than me at this task; how soon will they be better than me at many tasks?” The important point made is that these systems can’t generalize beyond their data or rules. They can’t say: ‘oh, I played this video driving game, so now I can drive a car’.

Which means that the goal of artificial general intelligence, that is, a system that can learn and reason about the real world, is still an unknown distance away. It would either have to have a full set of knowledge about the world, or you’d have to provide both the capacity and the experience that a human learns from (starting as a baby). Neither approach has shown any sign of being close.

A side issue was that of the datasets. It turns out that systems can learn implicit biases from their datasets. A case study was mentioned in which Asian faces triggered ‘blinking’ warnings, owing to typical eye shape. And this was from an Asian company! Similarly, word embeddings ended up biasing ‘woman’ towards associations with kitchens and homes, compared to ‘man’. This raises a big issue when it comes to making decisions: could loan offerings, fraud detection, or other applications of machine learning inherit bias from their datasets? And if so, how do we address it?
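As an illustrative sketch (assuming the gensim library and its downloadable GloVe vectors; this is my illustration, not something from the talk), you can probe this kind of association bias directly by comparing embedding similarities:

```python
# Probing association bias in pretrained word embeddings; the point is
# the woman/man comparison, not the specific numbers.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

for word in ("kitchen", "home", "engineer"):
    print(word,
          "woman:", round(float(wv.similarity("woman", word)), 3),
          "man:", round(float(wv.similarity("man", word)), 3))
```

If the training text carried the stereotype, ‘woman’ will sit measurably closer to ‘kitchen’ and ‘home’ than ‘man’ does, and any downstream decision built on those vectors inherits that tilt.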

Similarly, one issue was that of trust. When do we trust an AI algorithm? One suggestion was that it would come through experience (repeatedly seeing benevolent decisions or support). Which wouldn’t be that unusual. We might also employ techniques that work with humans: authority of the providers, credentials, testimonials, etc. One of my concerns was whether that could be misleading: we trust one algorithm, and then transfer that trust (inappropriately) to another? That wouldn’t be unknown in human behavior either. Do we need a whole new set of behaviors around NPCs? (Non-Player Characters, a reference to game agents that are programmed, not people.)

One analogy that was raised was to the industrial age. We started replacing people with machines. Did that mean a whole bunch of people were suddenly out of work? Or did that mean new jobs emerged to be filled? Or, since we’re now automating human-type tasks, will there be fewer tasks overall? And if so, what do we do about it? It clearly should be a conscious decision.

It’s clear that there are business benefits to AI. The real question, and this isn’t unique to AI but happens with all technologies, is how we decide to incorporate the opportunities into our systems. So, what do you think are the issues?


7 September 2017

Developing L&D

Clark @ 8:05 AM

One of the conversations I’ve been having is about how to shift organizations into modern workplace learning. These discussions haven’t been with L&D, but instead targeted directly at organizational strategy. The idea is to address a particular tactical goal as part of a strategic plan, and to do so in ways that both embody and develop a learning and collaboration culture. The question was then raised of how you’d approach an L&D unit under this picture. And I wondered whether you’d use the same approach to developing L&D as part of L&D operations. The answer isn’t obvious.

So what I’m talking about here would be to take an L&D initiative, and do it in this new way, with coaching and scaffolding. The overall model involves a series of challenges with support.  You’re developing some new organizational capability, and you’d scaffold the process initially with some made up or pre-existing challenges.  Then you gradually move to real challenges. So, does this model change for L&D?

My thought was that you’d take an L&D initiative, something out of the ordinary, an experiment. Depending on the particular organization’s context, it might be performance support, or social media, or mobile, or… Then you define an experiment and start working on it. To develop the skills to execute, you give a team (or teams) some initial challenges, e.g. critique a design. Then more complex ones: design a solution to a problem someone else has solved. Finally, you give them the real task, and let them go (with support).

This isn’t slow; it’s done in sprints, and still fits in between other work. It can be done in a matter of weeks.  In doing so, you’re having the team collaborate with digital tools (even if/while working F2F, but ideally you have a distributed team). Ultimately, you are developing both their skills on the process itself and on working together in collaborative ways.

In talking this through, I think this makes sense for L&D as well, as long as it’s a new capability being developed. This approach can rapidly develop new tactical skills and shift to a culture oriented towards innovation: experimentation and iterative moves. This is the future, and yet it’s unlike most of the way L&D operates now.

Most importantly, I think, this opportunity is on the table now for only a brief period. L&D can internally develop its understanding of, and ability with, the new ways of working as a step towards being an organization-wide champion. The same approach taken within L&D can then be used elsewhere. But it takes experience with this approach before you can scale it. Are you ready to make the shift?

23 August 2017

Dual OS or Teams of Teams?

Clark @ 8:03 AM

I asked this question in the L&D Revolution LinkedIn group I have to support the Revolutionize L&D book, but thought I’d ask it here as well. And I’ve asked it before, but I have some new thoughts based upon thinking about McChrystal’s Team of Teams. Do we use a Dual Operating System (Dual OS), with hierarchy being used as a base to pull out teams for innovation, or do we go with a fully podular model?

In a Dual OS org, the hierarchy continues to exist for doing the known work that needs to be done. Kotter pulls out select members to create teams that attack particular innovation elements. These teams change over time, so people are cycled back to the regular work and new folks are infused with the innovation approach.

My question here is whether this really creates an entire culture of innovation. In both Keith Sawyer’s Group Genius and Steven Johnson’s Where Good Ideas Come From, real innovation bubbles along, requiring time and serendipity. You can get innovative solutions to known problems from teams, but for new insights you need an ongoing environment for ideas to emerge, collide, and percolate/incubate/ferment. How do you get that going across the organization?

On the other hand, looking at the military, there’s a huge personnel development infrastructure that prepares people to be members of the elite teams. Individuals from these teams intermix to get the needed adaptivity, but it’s based upon a fixed foundation. And there are still many hierarchical mechanisms organized to support the elite work.  So is it really a fully teamed approach?

As I write this, it sounds like you do need the Dual OS, and I’m willing to believe it. My continuing concern, again, is what fosters the ongoing innovation? Can you have an innovative hierarchy as well? Can you have a hierarchy with a culture of experimentation, accepting mistakes, etc.? How do the small innovations in operating process occur along with the major strategic shifts? My intuitions go towards creating teams of teams, but doing it completely. I do believe everyone’s capable of innovation, and in the right atmosphere that can happen. I don’t think it’s separate; I believe it has to be intrinsic and ubiquitous. The question is, what structure achieves this? And I haven’t seen the answer yet. Have you? Perhaps we still have some experimentation to do ;).

15 August 2017

Innovative Work Spaces

Clark @ 8:09 AM

I recently read that Apple’s new office plan is receiving bad press. This surprises me, given that Apple usually has a handle on the latest ideas. Yet, upon investigation, it’s clear that they’re not being particularly innovative in their approach to work spaces. Here’s why.

The report I saw says that Apple is intending to use an open office plan, where all the desks are out in the open, or at best there are cubicles. The perceived benefit is open communication. And this is plausible when folks like Stan McChrystal in Team of Teams are arguing for ‘radical transparency’. The thought is that everyone will know what’s going on, which will streamline communication. Coupled with delegation, this should yield innovation, at the expense of some efficiency.

However, research hasn’t backed that up. Open office plans can even drive folks away, as Apple is hearing. When you want to engage with your colleagues and stay on top of what they’re doing, the openness is good. However, the lack of privacy means folks can’t focus when they’re doing heavy mental work. While it sounds good in theory, it doesn’t work in practice.

When I was keynoting at the Learning@Work conference in Sydney back in 2015, a major topic was about flexible work spaces. The concept here is to have a mix of office types: some open plan, some private offices, some small conference rooms. The view is that you take the type of space you need when you need it. Nothing’s fixed, so you travel with your laptop from place to place, but you can have the type of environment you need. Time alone, time with colleagues, time collaborating. And this was being touted both on principled and practical grounds with positive outcomes.

(Note that in McChrystal’s view, you needed to break down silos. He would strategically insert a person from one area with others, and have representatives engaged around all activities.  So even in the open space you’d want people mixed up, but most folks still tend to put groups together. Which undermines the principle.)

As Jay Cross let us know in his landmark Informal Learning, even the design of workspaces can facilitate innovation. Jay cited practices like having informal spaces to converse in, and putting the mail room and coffee room together to facilitate casual conversation. Where you work matters as well as how, and open plans have upsides but also downsides that can be mitigated.

Innovation is about culture, practices, beliefs, and technology. Putting it all together in a practical approach takes time and knowledge: figuring out where to start, and how to scale. As Sutton and Rao tell us, it’s a ground war, but the benefits are not just desirable, they’re increasingly necessary. Innovation is the key to transcending survival to thrival. Are you ready to (Qu)innovate?

3 August 2017

My policies

Clark @ 8:04 AM

Like most of you, I get a lot of requests for a lot of things. Too many, really. So I’ve had to put in policies to be able to cope.  I like to provide a response (I feel it’s important to communicate the underlying rationale), so I have stock blurbs that I cut and paste (with an occasional edit for a specific context).  I don’t want to repeat them here, but instead I want to be clear about why certain types of actions are going to get certain types of response. Consider this a public service announcement.

So, I get a lot of requests to connect on LinkedIn, and I’m happy to, with a caveat: you should have some clear relationship to learning technology, or be willing to explain why you want to link. I use LinkedIn for business connections, so I’m linked to lots of people I don’t even know, but they’re in our field.

I ask those not in learntech why they want to link. Some do respond, and often have a real reason (they’re shifting to this field, or their title masks a real role), and I’m glad I asked. Other times it’s the ‘Nigerian Prince’ or equivalent. Those get reported. Recently, it’s new folks who claim they just want to connect to someone with experience. Er, no. Read this blog instead. I also have a special message for those in learntech with biz dev/sales/etc. roles: I’ll link, but if they pitch me, they’ll get summarily unlinked (and I do).

And I likely won’t link to you on Facebook.  That’s personal. Friends and family. Try LinkedIn instead.

I get lots of emails, particularly from elearning or tech development firms, offering to have a conversation about their services. I’m sorry, but don’t you realize, with all the time I’ve been in the field, that I have ‘goto’ partners? And I don’t do biz dev, developing contracts and outsourcing production. As Donald H Taylor so aptly puts it, you haven’t established a sufficient relationship to justify offering me anything.

Then, I get email with announcements of new moves and the like, apparently with an expectation that I’ll blog it. WTH? Somehow, people think this blog is for PR. No; as it says quite clearly at the top of the page, this is for my learnings about learning. I let them know that I pay attention to what comes through my social media channels, not what comes unsolicited. I also ask what list they got my name from, so I can squelch it. And sometimes they have!

I used to get a lot of offers to either receive or write blog posts. (This had died down, but has resurfaced recently.) For marketing links, obviously. I don’t want your posts; see the above: my learnings! And I won’t write for you for free. Hey, that’s a service. See below.

And I get calls from folks offering me a place at their event. They’re pretty easy to detect: they ask whether I’d like access to a specific audience… I’ve learned to quickly ask if it’s pay-to-play. It always is, and I have to explain that that’s not how I market myself. Maybe I’m wrong, but I see that working for big firms with trained sales folks, not me. I already have my marketing channels. And I speak and write as a service!

I similarly get a lot of emails that let me know about a new product and invite me to view it and give my opinion. NO! First, I could spend my whole day on these. Second, and more importantly, my opinion is valuable! It’s the product of 35+ years of work at the cutting edge of learning and technology. And you want it for free? As if. Let’s talk some real evaluation, as an engagement. I’ve done that, and can for you.

As I’ve explained many times, my principles are simple: I talk ideas for free; I help someone personally for drinks/dinner; if someone’s making a quid, I get a cut.  And everyone seems fine with that, once I explain it. I occasionally get taken advantage of, but I try to make it only once for each way (fool me…).   But the number of people who seem to think that I should speak/write/consult for free continues to boggle my mind.  Exposure?  I think you’re overvaluing your platform.

Look, I think there’s sufficient evidence that I’m very good at what I do. If you want to refine your learning design processes, take your L&D strategy into the 21st century, and generally align what you do with how we think, work, and learn, let’s talk.  Let’s see if there’s a viable benefit to you that’s a fair return for me. Lots of folks have found that to be the case.  I’ll even offer the first conversation free, but let’s make sure there’s a clear two-way relationship on the table and explore it.  Fair enough?


2 August 2017

Ethics and AI

Clark @ 8:03 AM

I had the opportunity to attend a special event pondering the ethical issues that surround Artificial Intelligence (AI).  Hosted by the Institute for the Future, we gathered in groups beforehand to generate questions that were used in a subsequent session. Vint Cerf, co-developer of the TCP/IP protocol that enabled the internet, currently at Google, responded to the questions.  Quite the heady experience!

The questions were quite varied. Our group looked at Values and Responsibilities; I asked whether those were for the developers or the AI itself. Our conclusion was that it had to be the developers first. We also considered what else has been done in technology ethics (e.g. diseases, nuclear weapons), and what is unique to AI. A respondent mentioned an EU initiative to register all internet AIs; I didn’t have the chance to ask about policing and consequences. Those strike me as concomitant issues!

One of the unique areas was ‘agency’, the ability of an AI to act. This led to a discussion of the need for oversight of AI decisions. However, I suggested that if the AI was mostly right, human overseers would fatigue. So we pondered: could an AI monitor another AI? I also noted that there’s evidence that consciousness is emergent, and so we’d need to keep the AIs from communicating. It was pointed out that the genie is already out of the bottle, with chatbots online. Vint suggested that our brain is layered pattern-matchers, so maybe consciousness is just the topmost layer.

One recourse is transparency, but it needs to be rigorous. Blockchain’s distributed transparency could be a model. Of course, one of the problems is that we can’t even explain our own cognition in all instances (we make up stories that don’t always correlate with the evidence of what we actually do). And with machine learning, we may be making up stories about what the system is using to analyze behaviors and make decisions, but they may not correlate either.
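As an illustrative sketch of the tamper-evident flavor of transparency that blockchain suggests (my illustration, not the panel’s; hypothetical record fields, standard library only, and a real system would also distribute copies across parties), each logged decision can be chained to the one before it:

```python
# A minimal tamper-evident decision log: each entry's hash covers its
# record plus the previous entry's hash, so past edits break the chain.
import hashlib
import json

def append_record(log, record):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"input": "claim-123", "decision": "flag", "model": "v1"})
append_record(log, {"input": "claim-124", "decision": "approve", "model": "v1"})
print(verify(log))  # True; editing any past record makes this False
```

Any later edit to a past record changes its hash and breaks every link after it, so the log can be audited even when we can’t yet explain the decisions themselves.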

Similarly, machine learning is very dependent on the training set. If we don’t pick the right inputs, we might miss factors that would be important to incorporate in the answers. Even if we have the right inputs but don’t have a good training set of good and bad outcomes, we get biased decisions. It’s been said that what people are good at is crossing silos, whereas machines tend to be good in narrow domains. This is another argument for oversight.

The notion of agency also brought up the issue of decisions. Vint inquired why we were so lazy in making decisions, arguing that we’re making systems we no longer understand! I didn’t get the chance to answer that decision-making is cognitively taxing; as a consequence, we often work to avoid it. Moreover, some of us are interested in X, and so are willing to invest the effort to learn it, while others are interested in Y. So it may not be reasonable to expect everyone to invest in every decision. Also, our lives keep getting more complex: when I grew up, you just had phone and TV; now you need to worry about internet, and cable, and mobile carriers, and smart homes, and… So it’s not hard to see why we want to abdicate responsibility when we can! But when can we, and when do we need to be careful?

Of course, one of the issues is AI taking jobs. Cerf stated that innovation takes jobs, and generates jobs as well. However, the problem is that those who lose the jobs aren’t necessarily capable of taking the new ones. Which brought up an increasing need for learning to learn as the key ability for people. Which I support, of course.

The overall problem is that there isn’t central agreement on what ethics a system should embody, even if we could build it in. We currently have different cultures with different values. Could we find agreement when some might have different views of what, say, acceptable surveillance would be? Is there some core set of values required for a society to ‘get along’? Even that might vary by society.

At the end, there were two takeaways. For one, the question is whether AI can help us help ourselves! And the recommendation is that we should continue to reflect and share our thoughts. This is my contribution.

27 July 2017

Barry Downes #Realities360 Keynote Mindmap

Clark @ 9:59 AM

Barry Downes talked about the future of the VR market, with an interesting exploration of the Immersive platform. Taking us through the Apollo 11 product, he showed what went into it and its emotional impact. He showed a video that talked (somewhat simplistically) about how VR environments could be used for learning. (There is great potential, but it’s not about content.) He finished with an interesting quote about how VR would be able to incorporate any further media. A second part of the quote said: “Kids will think it’s funny [we] used to stare at glowing rectangles hoping to suspend disbelief.”

VR Keynote

25 July 2017

What is the Future of Work?

Clark @ 8:07 AM

Just what is the Future of Work about? Is it about new technology, or is it about how we work with people? We’re seeing amazing new technologies: collaboration platforms, analytics, and deep learning. We’re also hearing about new work practices such as teams, working (or reflecting) out loud, and more. Which is it? And/or how do they relate?

It’s very clear technology is changing the way we work. We now work digitally, communicating and collaborating. But there are more fundamental transitions happening. We’re integrating data across silos, and mining that data for new insights. We can consolidate platforms into single digital environments, facilitating the work. And we’re getting smart systems that do things our brains quite literally can’t, whether complex calculations or reliable rote execution at scale. Plus we have technology-augmented design and prototyping tools that are shortening the time to develop and test ideas. It’s a whole new world.

Similarly, we’re seeing a growing understanding of the work practices that lead to new outcomes. We’re finding that people work better when we create environments that are psychologically safe, when we tap into diversity, when we are open to new ideas, and when we have time for reflection. We find that working in teams, sharing and annotating our work, and developing learning and personal knowledge mastery skills all contribute. And we even have new practices, such as agile and design thinking, that bring us closer to the actual problem. In short, we’re aligning practices more closely with how we think, work, and learn.

Thus, either could be seen as ‘the Future of Work’.  Which is it?  Is there a reconciliation?  There’s a useful way to think about it that answers the question.  What if we do either without the other?

If we use the new technologies in old ways, we’ll get incremental improvements. Command and control, silos, and transaction-based management can be supported, and even improved, but they will still limit the possibilities. We can track things more closely. But we’re not going to be fundamentally transformative.

On the other hand, if we change the work practices, creating an environment where trust allows both safety and accountability, we can get improvements whether we use new technology or not. People have the capability to work together using old technology. You won’t get the benefits of some of the improvements, but you’ll get a fundamentally different level of engagement and outcomes than with the old approach.

Together, of course, is where we really want to be. Technology can have a transformative, amplifying effect on those practices. Together, as they say, the whole is greater than the sum of the parts.

I’ve argued that using new technologies like virtual reality and adaptive learning only makes sense after you first implement good design (otherwise you’re putting lipstick on a pig, as the saying goes). The same is true here. Implementing radical new technologies on top of old practices that don’t reflect what we know about people is a recipe for stagnation. Thus, to me, the Future of Work starts with practices that align with how we think, work, and learn, and are augmented with technology, not the other way around. Does that make sense to you?

