Learnlets

Clark Quinn’s Learnings about Learning

Archives for 2021

Levels of Organizational Alignment

19 October 2021 by Clark Leave a Comment

Several years ago, I was pushing the notion of the Coherent Organization. While I still feel it’s relevant, perhaps the time wasn’t right or I wasn’t convincing enough. However, as I continue to consider the issue of alignment of what we do in L&D (and organizational) practices, I realize there’s more. One way, then, to think about the coherent organization is as achieving levels of organizational alignment.

Starting from the top, I think of the alignment with the organization and society. Normally, and probably most importantly for survival, organizations need to think about alignment with their market. (In appropriate ways; I’m reminded how the freight business got upended when companies thought they were in the train business and not the transportation business.) However, there is a level above the market, and that is whether the org is serving the market in a society-appropriate way. For instance, if you’re helping your customers rip off their clients, it may be lucrative but it’s not a scrutable way to do business. I like the notion of benefit corporations (though they may not go far enough). Don’t do well by doing ill.

Which is the next level of alignment: employees with the organization’s mission. They’ll be more engaged if that mission is appropriate! Further, I like the notion of ‘employee experience’. I’ve heard it said that you can’t have a good customer experience if you don’t have a good employee experience. That’s plausible. I think Dan Pink’s Drive says it well: you want your employees to have Mastery, Autonomy, and Purpose. Which means having a clear raison d’être, goals and the freedom to pursue them, and support to succeed.

Accompanying that is a workplace culture that’s supportive of success. I like Jerry Michalski’s focus on trust; start from there. Then have transparency, e.g. ‘show your work’ and ‘learn out loud’. I’m also a fan of the Learning Organization Dimensions of Garvin, Edmondson, & Gino. I like how Amy Edmondson has gone on to advocate for including both safety and accountability as complementary components of success.

Of course, this carries down to the individual level. For instance, including a focus on having performers prepared up front and developed over time. This includes a shift to coaching and mentoring, as well as learning experience design grounded in the sciences of learning and engagement. Going further, we should have people not just knowing their purpose but getting feedback on how they’re doing in achieving it. Recognition matters, with positive recognition of accomplishment or support to improve. Against an objective metric, of course, not compared to others.

There’s more, but most importantly, it’s aligning all these from bottom to top. For instance, you could be creating a great culture to serve a bad purpose. Alternatively, you could have a great purpose but use industrial era methods to get there. I have to admit that, having served in orgs of various sizes, and seen the pockets of inefficiency that can emerge, I wonder how any business makes any money! Still, there’s evidence that the better you’re aligned, the better you do. (See the Towards Maturity Top Deck results or Laurie Bassi’s work on the link between people approaches and org success.)

Achieving success at all levels of organizational alignment is a path to success. No one’s saying it’s easy, but it is doable. Further, it’s your best investment in the future. Just as with designing learning, where you get the core right before you add shiny objects, the same is true for organizations. There’s a transformation in practices to be done before you then apply the digital transformation. However, once you align these, as well, you’re on an upward path. Shall we?

By the way, this is aligned :) with the theme of what I’ll be talking about in my opening keynote for the ATD Japan Summit.

Higher-education Myths

12 October 2021 by Clark 4 Comments

I’ve been involved in higher education in a variety of ways: as a victim, er, student; as a grad student; post-doc; tenured/promoted faculty member; textbook publishing consultant; strategic elearning consultant… Further, in general, I’m a supporter. I do have quibbles, and one is the persistence of learning myths. Trust me, I wrote a whole book on what the research says about them! In addition to having talked about org learning myths, let me explore some elements of higher-education myths.

I saw an article in the top education news source in the country, The Chronicle of Higher Education. I get their daily newsletter, just to keep my finger on the pulse. However, this article was touting issues for Gen Z students. Yet, research says that the ‘generations’ framework isn’t valid. There’s no reliable data showing that generations are a viable distinction. In fact, it literally is discrimination (in terms of using arbitrary distinctions to label people).

This is only part of the broader problem. A colleague regularly chides his alma mater for continuing to believe in learning styles. This, too, is a myth! While learners do differ, there’s no evidence we should adapt learning to learning styles, let alone that we can reliably identify them. It’s appealing, but wrong. Not that it isn’t also prevalent amongst K12 teachers.

Which is related to another problem: business school curricula. I was surprised, and dismayed, to find that a prominent business school has personality instruments as part of its curriculum! This includes MBTI, which is discredited both theoretically and empirically. Other such instruments, also with flaws, continue to be included. I’m sure there may be some financial motivation as well. (E.g., like Apple & Microsoft offering huge discounts to schools, to get new users used to their experience.)

We should not tolerate learning myths in university. Aren’t these bastions of science? Ok, that’s another myth, that universities aren’t riven with politics, but that’s not the focus here. Still, universities should be better at rejecting myths, just as they should also be better about using the best pedagogies. Which they also aren’t doing, by and large ;).

There are more myths about universities, and issues like what their role in society should be. That’s not what I’m talking about here, though. For all their other issues, they should not be perpetuators of higher-education myths. (Here’s hoping they’re also not guilty of the ‘attention span of a goldfish’ myth!)

The (Post) Cognitive Perspective

5 October 2021 by Clark 5 Comments

I’m deeply steeped in the cognitive sciences, owing to a Ph.D. in cognitive psych. Fortuitously, this was at the time my advisor was creating the cognitive science program (and more). So I’ve a bias. Yet I also have a fair bit of empirical evidence that taking a cognitive perspective accomplishes things that are hard to do in other ways. So let me make the case that the cognitive perspective is more than just a useful one, but arguably a necessary one.

I’ll start by reflecting back on something I wrote before, about virtual world affordances. At the time, platforms like Second Life were touting the advantages of an immersive navigable world. Of course, the promises were all-encompassing: everything would move to virtual worlds. In retrospect, it didn’t eventuate. Why? I argue it’s because the cognitive overhead of virtual worlds means that there has to be a sustained value proposition, and that comes only when you truly need 3D immersion and social interaction.

Similarly, when I wrote my books on games and mobile, I focused on the cognitive impacts. The first reason was because technology was changing so fast that anything hardware-specific would be out of date before the book was published. The second is because our brains don’t change that fast, so what works will work regardless of the technology.

Note that our understanding of cognition has changed. We’re now in a ‘post-cognitive’ era, where the notion that all our formal, logical thinking is done in our heads is wrong. Research is showing that we’re far more ‘situated’ than we think, and distributed as well. That includes being distributed across external representations and other people! It’s very contextual, and it’s not all in our heads!

So these days, when I look at things, I try to look with a cognitive (ok, post-cognitive) perspective. I look to see how things align, or not, with how our brains work. When I evaluate learning technologies, for instance, I look to see how well they do things like provide meaningful practice: active and contextualized. You can also see when particular technologies (e.g. VR/AR/AI) will be valuable, and when not. Similarly, when I look at workplace change proposals, I look at how well they reflect our mechanisms for adapting to change.

I’ll argue that these perspectives are valuable. You can quickly see why most training doesn’t work, cut through hype from vendors, create explanations about why myths are mythtaken, etc. You can save money, be more effective, etc., when you align with how our brains work. I’ve talked before about how there are gaps. This is the flip side: how to avoid those gaps, and do better. In short, you’re better able to assist your organization in being more effective (and efficient).

That’s why I’m pleased that I am able to put these basics into the learning science book, and workshops. It’s possible to get better at this sort of perspective. It’s also possible to get it on tap as needed. However, it does take both the cognitive understanding and the experience in applying it. So, how’s your cognitive perspective?

On a side note, I want to encourage you to consider my workshop at DevLearn on Make It Meaningful, a full day exploring how we make learning experiences deeply engaging (adding to effectiveness). This is also the topic of my online workshop through the Learning Development Accelerator. This is, to me, the most important topic to complement learning science. (Available as a book and workshop. ;) In both cases, I’m trying to help us stop making boring courses that people want to avoid, and suggest that this can be done for most any topic. It also leads to more effective learning outcomes! Hope to see you at one! (Of course, if your organization would like your own private version, let me know!)

Meta-ethics of learning design

28 September 2021 by Clark 1 Comment

I’ve addressed ethics elsewhere, but I’m looking at it a different way now. I’m thinking from the perspective of situated cognition, and recognizing that there are certain things we can do. For better or worse. Further, these choices have ramifications beyond the initial impact. I think we need to be aware of the possibilities, and then consider the meta-ethics of learning design.

My particular concern sparks from the notion of how we are contextually sensitive. To make this concrete, think of the research by Beth Loftus. I’ll characterize a whole suite of research with a simply paraphrased experiment. So, she had folks watch a video of an accident. Then, she prompted recall of the amount of damage they witnessed. For one group, she just asked them. For another group, the recall was prompted by “Besides the broken glass,…”. The latter group recalled worse damage. And. There. Was. No. Broken. Glass!

The point here is that context can influence our thinking and memory. Which is what I worry about with videos. They can take a statement as if it’s fact, and then continue on with that as received wisdom. It’s a classic cognitive approach, making a statement as if it’s assumed. The bad part is that there’s a narrative flow, and it’s hard to stop and reflect. Versus, say, reading.

On the other hand, of course, we can build in reflection time. Sure, learners can use the pause button, but there are times they may not. For instance, if the learner is following confirmation bias, that is, looking at things that align with what they believe.

We have the option to use coercive techniques for good, of course. However, we can also choose to use legitimate presentation techniques. I believe we should. Part of the development of metacognition is seeing the pedagogy being used and internalizing it. If our pedagogy is visible (as it should be), it needs to be scrutable if we want our learners to adopt appropriate skills.

Our designs, and our meta-designs, need to be ethically designed, both to effectively achieve our ends, and to develop our learners. We need to support the meta-ethics of learning design, as well as the ethics themselves.  

Complexity in Learning Design

21 September 2021 by Clark Leave a Comment

I recently mentioned that one of the problems with research is that things are more interconnected than we think. This is particularly true with cognitive research. While we can make distinctions that simplify things in useful ways (e.g. the human information processing system model*), the underlying picture is of a more interactive system. Which underpins why it makes sense to talk about Learning Experience Design (LXD) and not just instructional design. We need to accommodate complexity in learning design. (* Which I talk about in Chapter 2 of my learning science book, and in my workshops on the same topic through the Allen Academy.)

We’re recognizing that our cognition is more than just in our head. Marcia Conner, in her book Learn More Now, mentioned how neuropeptides pass information around the body. Similarly, Annie Murphy Paul’s The Extended Mind talks about moving cognition (and learning) into the world. In my Make It Meaningful workshops (online or F2F at DevLearn 19 Oct), I focus on how to address the emotional component of learning. In short, learning is about more than just information dump and knowledge test.

Scientifically, we’re finding there are lots of complex interactions between the current context, our prior experience, and our cognitive architecture. We’re much more ‘situated’ in the moment than the rational beings we want to believe. Behavioral economics and Daniel Kahneman’s research have made this abundantly clear. We try to avoid the hard mental work using shortcuts that work sometimes, but not others. (Understanding when is an important component of this).

We get good traction from learning science and instructional design approaches, for sure. There are good prescriptions (that we often ignore, for reasons above) about what to do and how. So, we should follow them. However, we need more. Which is why I tout LXD  Strategy! We need to account for complexity in learning design approaches.

For one, our design processes need to be iterative. We’ll make our best first guess, but it won’t be right, and we’ll need to tune. The incorporation of agile approaches, whether SAM or LLAMA or even just iterative ADDIE, reflects this. We need to evaluate and refine our designs to match the fact that our audience is more complex than we thought.

Our design also needs to think about the emotional experience as well as the cognitive experience. We want our design processes to systematically incorporate humor, safety, motivation, and more. Have we tuned the challenge enough, and how will we know?  Have we appropriately incorporated story? Are our graphics aligned or adding to cognitive load? There are lots of elements that factor in.

Our design process has to accommodate SMEs who literally can’t access what they do. Also learner interests, not just knowledge. We need to know what interim deliverables, evaluation processes, and tools we need, and when we shouldn’t be working solo. Most importantly, we have to do this in a practical way, under real-world resource constraints.

Which is why we need to address this strategically. Too many design processes are carry-overs from industrial approaches: one person, one tool, and a waterfall process. We need to do better. There’s complexity in learning design, both on the part of our learners, and ourselves as designers. Leveraging what we know about cognitive science can provide us with structures and approaches that accommodate these factors. That’s only true, however, if we are aware and actively address it. I’m happy to help, but can only do so if you reach out. (You know how to find me. ;) Here’s to effective and engaging learning!

Reading Research?

14 September 2021 by Clark Leave a Comment

I was honored to have a colleague laud my Myths book (she was kind enough to also promote the newer learning science book), but it was something she said that I found intriguing. She suggested that one of the things in it includes “discussing how to read research”. And it occurs to me that it’s worth unpacking the situation a wee bit more. So here’s a discussion about how we (properly) develop learning science that informs us in reading research.

Caveat: I haven’t been an active researcher for decades, serving instead to interpret and apply the research, but it’s easier to say ‘we’ than “scientists”, etc.

Generally, theory drives research. You’ve created an explanation that accounts for observed phenomena better than previous approaches. What you do then is extend it to other predictions, and test them.  Occasionally, we do purely exploratory studies just to see what emerges, but mostly we generate hypotheses and test them.

We do this with some rigor. We try to ensure that the method we devise removes confounding variables, and then we use statistical analysis to remove the effects of other factors. For instance, I created a convoluted counterbalancing approach to remove order effects in my Ph.D. research. (So complicated that I had to analyze a factor or two first, to ensure they weren’t having an effect, so I could remove them from the resulting analysis!) We also try to select relevant subjects, design uncontaminated materials, and carefully control our analysis. Understanding the ways in which we do this requires some knowledge of experimental design, which isn’t common.
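
To make the counterbalancing idea concrete, here’s a minimal sketch (my illustration, not the actual design from that research) of a balanced Latin square, a standard way to control order effects: each condition appears once in every serial position, and, for an even number of conditions, each condition precedes every other condition equally often.

```python
def balanced_latin_square(n_conditions):
    """Presentation orders (one row per participant group) over conditions 0..n-1."""
    rows = []
    for start in range(n_conditions):
        order, up, down = [], 0, 0
        for position in range(n_conditions):
            if position % 2 == 0:
                order.append((start + up) % n_conditions)    # count up from the start condition
                up += 1
            else:
                down += 1
                order.append((start - down) % n_conditions)  # alternate with counting down
        rows.append(order)
    return rows

if __name__ == "__main__":
    for row in balanced_latin_square(4):
        print(row)  # every condition shows up once in each column (serial position)
```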

Moreover, we then need to share this with our colleagues so that they can review what we’ve done. We need to do it in unambiguous language, using the specific vocabulary of our field. And we need to make it scrutable. Thus, we publish in peer-reviewed journals, which means others have looked at our work and deemed it acceptable. However, the language is deliberately passive, unemotional, and precise, as well as focused on a very narrow topic. Thus, it’s not a lot of fun to read unless you really care about the topic!

There are problems with this. Increasingly, we’re finding that trying to isolate independent variables doesn’t reflect the inherent interactions. Our brains actually have a lot of complexity that hinders simple explanations. We’ve also found that it’s difficult to get representative subjects, when what’s easy to get are higher education students in the developed world. There are also politics involved, sad to say, so that it can be hard for new ideas to emerge if they challenge the entrenched views. Yet, it’s still the best approach we have. The scientific method has led to more advances in understanding than anything else!

There are things to worry about as a consumer of science. For one, there are people who fake results. They’re few, of course. There’s also research that’s kept proprietary, for financial reasons. Or is commissioned. As soon as there’s money involved, there’s the opportunity for corruption (think: tobacco, and sugar). Companies may have something that they tout as valid, but the research base isn’t publicly available. Caveat emptor!

Thus, being able to successfully read research isn’t for everyone. You need to be able to comprehend the studies, and know when to be wary. The easy thing to do is to look for translations, and translators, who have demonstrated a trustworthy ability to help sort out the wheat from the chaff. They exist.

I hope this illustrates what reading research requires. You can take some preliminary steps: give it the ‘sniff’ test, see if it applies to you, and see who’s telling you this (and whether anyone else agrees or says the contrary) and what their stake in the game is. If these steps don’t answer a question, however, maybe you want to look for good guidance. Make sense?

 

Coping with Change: A Book Review of Flux by April Rinne

9 September 2021 by Clark 1 Comment

How do we cope with change? There’s a myth that we resist change, but Peter de Jager busted that in a talk I heard where he pointed out that we make changes all the time. We get married, take a different job, have kids, all of which are changes. The difference is that these are changes we choose! However, in this era of increasing change, we’re likely going to face more and more changes we didn’t expect. Can we improve our ability for coping with change? Yes, says April Rinne in her book Flux: 8 Superpowers for Thriving in Constant Change.

And here’s a caveat: I am part of a group she put together to talk about Flux while writing the book. I’m in the acknowledgements.

April, faced with a heavy unchosen change in her teens, carried that with her. It’s driven her interest in change and how we can learn to cope. Given that we’re in an era of increasing change, she recognized that we would benefit from having some approaches to improve our resilience. She looked at a wide variety of inputs, and has distilled her learnings into 8 mental frameworks that assist.

The underlying focus is on a flux mindset, that is, a stance that change is coming and is to be accepted, not resisted. The eight different ways of looking at the world are deliberately provocative, but also apt:

  • Run Slower
  • See What’s Invisible
  • Get Lost
  • Start with Trust
  • Know Your ‘Enough’
  • Create Your Portfolio Career
  • Be All the More Human
  • Let Go of the Future

Each gets a chapter, with illustrations of the challenge, and practical ways to enact it. You may find, like I did, that some are familiar, others are more challenging. Each comes from ancient wisdom, practical experience, or both. The ones that were new I find to be all the more interesting. And useful!

That’s the real key. It’s very much aligned with what we know about how our brains work (a big issue with me, as this audience has probably learned ;). Some areas I feel like I’ve a handle on (e.g. run slower), and others are more challenging (e.g. see what’s invisible). There are bound to be areas of work for you. The upside of that work, however, is likely to be a better ability to ‘be’.

This is a book that you’ll want your loved ones to read, because what it provides aligns with a view of the world as it could and should be. It’s a guide for coping with change that addresses not only individuals, but organizations and society as a whole.  Highly recommended.

Iterating and evaluating

7 September 2021 by Clark Leave a Comment

I’ve argued before about the need for evaluation in our work. This occurs summatively, where we’re looking beyond smile sheets to actually determine the impact of our efforts. However, it also should work formatively, where we’re seeing if we’re getting closer. Yet there are some ways in which we go off track. So I want to talk about iterating and evaluating our learning initiatives.

Let’s start by talking about our design processes. The 800 lb gorilla of ADDIE has shifted from a waterfall model to a more iterative approach. Yet it still brings baggage. Of late, more agile and iterative approaches have emerged, not least Michael Allen’s SAM and Megan Torrance’s LLAMA. Agile approaches, where we’re exploring, make more sense when designing for people, with their inherent complexity.

Agile approaches work on the basis of creating, basically, Minimum Viable Products, and then iterating. We evaluate each iteration. That is, we check to see what needs to be improved, and what is good enough. However, when are we done?

In my workshops, when talking about iteration, I like to ask the audience this question. Frequently, the answer is “when we run out of time and money”. That’s an understandable answer, but I maintain it’s the  wrong answer.

If we iterate until we run out of time and money, we don’t know that we’ve actually met our goals. As I explained about social media metrics (and it applies here too), you should be iterating until you achieve the metrics you’ve set. That means you know what you’re trying to do!

Which requires, of course, that you set metrics about what your solution should achieve. That could include usability and engagement (which come before and after, respectively), but most critically ‘impact’. Is this learning initiative solving the problem we designed it to address? Which also means you need to have a discussion of why you’re building it, and how you know it’s working.

Of course, if you’re running out of time and money faster than you’re getting close to your goal, you have to decide whether to relax your standards, or apply for more resources, or abandon your work, or… but at least you’re doing so consciously. That’s still better than arbitrarily deciding that, say, three iterations is appropriate.
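
To make that stopping rule concrete, here’s a minimal sketch (hypothetical function names, not any particular tool or process) of iterating until the impact metric is met rather than until the budget runs out, with the conscious decision point when the budget is hit first:

```python
def refine_until_target(first_draft, revise, evaluate, target_impact, max_iterations):
    """Iterate on a design until it meets the impact metric set up front."""
    version = first_draft
    for _ in range(max_iterations):
        impact = evaluate(version)             # formative evaluation of this iteration
        if impact >= target_impact:
            return version, impact             # stop because the goal is met
        version = revise(version, impact)      # tune based on what the evaluation showed
    # Budget exhausted before the target: decide consciously -- relax the standard,
    # request more resources, or stop -- rather than just declaring victory.
    return version, evaluate(version)
```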

I do recognize that this isn’t our current situation, and changing it isn’t easy. We’re still asked to make slide decks look good, or create a course on X, etc. Ultimately, however, our professionalism will ask us to do better. Be ready. Eventually, your CFO should care about the return on your expenditures, and it’ll be nice to have a real answer. So, iterating and evaluating should be your long term approach. Right?

Making it Meaningful

31 August 2021 by Clark 1 Comment

I volunteer for our local Community Emergency Response Team (CERT; and have learned lots of worthwhile things). On a call, our local organizer mentioned that she was leading a section of the train-the-trainers upcoming event, and was dreading trying to make it interesting. Of course I opened my big yap and said that’s something I’m focusing on, and offered to help. She took me up on it, and it was a nice case study in making it meaningful.

Now, I have a claim that you can’t give me a topic that I can’t create a game for. I’m now modifying that to ‘you can’t give me a topic I can’t make meaningful’.  She’d mentioned her topic was emergency preparedness, and while she thought it was a dull topic, I was convinced we could do it. I mentioned that the key was making it visceral.

I had personal experience; last summer our neighbor was spreading the rumor that we were going to have to evacuate owing to a fire over the ridge. (Turns out, my neighbor was wrong.) I started running around gathering sleeping bags, coats, dog crate, etc. Clearly, I was thinking about shelter. When I texted m’lady, she asked about passports, birth certificates, etc. Doh!

However, even without that personal example, there’s a clear hook. When I mentioned that, she noted that when you’re in a panic, your brain shuts down some, and it’s really critical to be prepared. However, someone else was taking that bit, and her real topic was different types of disasters. Yet my example had already got her thinking, and she started talking about different people being familiar with an earthquake (here in California).

I thought of how, when talking with scattered colleagues, they exclaim about how scary earthquakes are, and I remind them that every place has its hazards. In the midwest it could be tornados or floods. On the east coast it’s hurricanes. Etc. The point being that everyone has some experience. Tapping into that, talking about consequences, is a great hook.

That’s the point, really. To get people willing to invest in learning, you have to help people see that they do need it. (Also, that they don’t know it now, and that this experience will change that.) You need to be engaged in making it meaningful!

Again, in my mind learning experience design (LXD) is about the elegant integration of learning science with engagement. You need to understand both. I’ve got a book and a workshop on learning science, and I’ve a workshop at DevLearn on the engagement side. I’ve also got a forthcoming book and an online workshop for more on engagement. Stay tuned!

More lessons from bad design

24 August 2021 by Clark 2 Comments

I probably seem like a crank, given the way I take things apart. Yet, I maintain there’s a reason beyond “get off my lawn!” I point out flaws not to complain, but instead to point to how to do it better. (At least, that’s my story and I’m sticking to it. ;) Here’s another example, providing more lessons from bad design.

In this case, I’ll be attending a conference and the providers have developed an application to support attendees. In general, I look forward to these applications. They provide ways to see who’s attending, and peruse sessions to set your calendar. There are also ways to connect to people. However, two major flaws undermine this particular instance.

The first issue is speed. This application is slow! I timed it; 4 seconds to open the list of speakers or attendees. Similarly, I clicked on a letter to jump through the list of attendees. The amount of time it took varied from 4 to 8 seconds. Jumping to the program took 6 seconds.

While that may seem short, compare that to most response times in apps. You essentially can’t time them, they’re so fast. More than a second is an era in mobile responsiveness. I suspect that this app is written as a ‘wrapped’ website, not a dedicated app. Which works sometimes, but not when the database is too big to be responsive. Or it could just be bad coding. Regardless, this is basically unusable. So test the responsiveness before it’s distributed to make sure it’s acceptable. (And then reengineer it when it isn’t.)

That alone would be sufficient to discount this app, but there’s a second problem. Presumably for revenue reasons, there are ads that scroll across the top. Which might make sense to keep the costs of the app down, but it runs up against a fundamental feature of our visual architecture.

Motion in the periphery of our vision is distracting. That was evolutionarily adaptive, allowing us to detect threats from places that we weren’t focusing on. Yet, when it’s not a threat, and we  are trying to focus on something, it interferes. We learned about this in the days of web pages with animated gifs: you couldn’t process what you were there to consume!

In this app, the scrolling of the ads makes it more difficult to read the schedule, attendee lists, and other information. Thus, the whole purpose of the application is undermined. You could have static ads that are randomly attached to the pages you click on. The audience is likely to go to several pages, so all the ads will get seen. Having them move, however, to ensure that you see them all undermines the whole purpose of the app.

Oddly enough, there are other usability problems here. On the schedule, there’s a quick jump to times on a particular day. Though it stops at 2PM!?!? (The conference extends beyond that; my session’s at 4PM.) You’d think you could swipe to see later times on that ‘jump’ menu, but that doesn’t work. I can’t explore further, because the usability makes it too painful; so we may be missing more lessons from bad design.

Our cognitive architecture is powerful, but has limitations. Designing to work in alignment with our brains is a clear win; and this holds true for designing for learning as well as performance support. Heck, I’ve written a whole book  about how our minds work, just to support our ability to design better learning! Conflicting with our mental mechanisms is just bad design. My goal is that with more lessons in bad design, we can learn to do better. Here’s to good design!
