I was asked about the latest techno-hype, bionic reading. At the same time, there’s a discussion happening about the learning affordances of the metaverse. I realize my strategy is the same in both cases, one I learned many years ago (wish I could remember from whom!). The short version is: wait until the dust settles. Why? Let’s evaluate the late adopter strategy.
So, for anything new, there all too frequently seems to be a lot of flash. In my experience, a lot more flash than substance! That is, many things rise, and most fall. When things calm down after the initial exuberance, most simply disappear. There are myriad reasons: acquisition and shutdown by a competitor, failure of some element despite a good premise, or even unexpected factors outside anyone’s control (e.g. a pandemic!). Of course, the usual suspect is that there’s no real there there!
I remember the hype over Second Life, and recognizing that the core elements were 3D and social. Yet what we saw were slide presentations in a virtual world, which was nonsensical. I’ve suggested before that you can infer the properties of new technologies, in many cases, by considering their cognitive affordances. I’ll await the metaverse manifestation, but it seems to me to be the same, just more immersion. Still, there’s a lot of technical and cognitive overhead to overcome before it’s worthwhile.
Similarly with bionic reading. There are now lots of anecdotal suggestions that it’s better. That’s not the same, however, as a true experimental study. Individual experiences don’t always correlate with actual impact. There are myriad reasons for this too, e.g. self-fulfilling prophecy, perception vs reality, etc. Still, I really want some more convergent evidence. Here it’s harder to do the affordances analysis. Yes, it might support people who have difficulty reading, but might it interfere with others? How will we know?
On the basis of the above, then, I suggest waiting until something’s been around for a while, and then, if it persists, starting to investigate what the affordances might be. Many things have come and gone, and I’m glad I didn’t bite. I might then be late to a platform, but that’s OK. I still tend to get opportunities to innovate around ideas of application after they’re established, because, well, that’s what I do ;). Affordances help, as do lateral thinking and having lots of mental models on tap to spark ideas.
We’re too easily enchanted with the latest shiny object. No argument that it’s worth experimenting with these things, but don’t swallow the hype until you’ve got data, either your own or someone else’s. I reckon rushing in has a greater chance of loss than of gain. Let those with the needs, resources, and opportunity take the first cuts. There’s no need to bleed prematurely; there’ll be plenty of need to tune and test even once principles emerge. So that’s my take on the value of a ‘late adopter’ strategy. What’s yours?
Daryl says
Your thoughts on determining the right times for appropriate levels/extent of “cursory”, more “informational”, and deeper & strategic impact/opportunity/risk/resource investigation?
Clark says
Off the top of my head, I think the cursory look happens immediately, but it’s just to make sure you understand what the proposal is, so you’re aware when/if people start talking about real value outcomes. If that happens in credible ways, it may be time to gather information about providers, tech requirements, use cases, etc. If there seems to be a real opportunity for impact, then it’s time for some pilots: experiments that aren’t business-critical but can mimic those situations and help you evaluate. Along the way, of course, you’re tracking down the underlying research, evaluating the methodology behind the claims, and more. Make sense?