After my presentation in Shanghai on AI for L&D, a number of conversations ensued that led to some reflections. I'm boiling them down here to a few rules that seem to make sense going forward.
1. Don't worry about AI overlords. At least, not yet ;). Rodney Brooks wrote a really nice article about why we might fear AI, and why we shouldn't. In it, he cited Amara's Law: we tend to overestimate the impact of a technology in the short term, and underestimate it in the long term. I think we're in the short term of AI, and while it's easy to extrapolate from smart behavior in a limited domain to similar behavior in another (and sensible for humans to do), it turns out to be hard to get computers to generalize that way.
2. Do be concerned about how AI is being used. AI can be used for ill or for good, and we should be concerned about the human impact. I realize that a focus on short-term returns might suggest replacing people when possible. And anything rote enough arguably should be automated, since it's a sad use of human ability. Still, there are strong reasons to consider the impact on the people being affected, not least humanitarian ones, but also practical ones. Which leads to:
3. Don't have AI without human oversight (at least in most cases). As stated in 1, AI doesn't generalize well. While it can be trained to work within the scope you describe, it will suffer at the boundary conditions and in any ambiguous or unique situations. It may well make a better judgment in those cases, but it also may not. In most cases, it will be best to have an external review process for the decisions being made, or at least for those at the periphery. Because:
4. Your AI is only as good as its data set and/or its algorithms. Much of machine learning essentially runs on historical datasets, and historical datasets can have historical biases in them. For instance, if you were to build a career counselor based upon what's been done in many examples across schools, you might find that women were being steered away from math-intensive careers. Similarly, if you're using a mismatched algorithm (as often happens in statistics, for example), you could be biasing your results.
5. Design as if AI means Augmented Intelligence, not Artificial Intelligence (perhaps an extension of 3). There are things humans do well, and things that computers do well. AI is an attempt to address the intersection, but if our goal is (as it should be) to get the best outcome, it's likely to be a hybrid of the two. Yes, automate what can and should be automated, but first consider what the best total solution would be, and then, if it's ok to just use the AI, do so. But don't assume that it is.
6. AI on top of a bad system is a bad system. This is, perhaps, a corollary to 4, but it goes further. So, for instance, if you create a really intriguing simulated avatar for practicing soft skills, but you're still not providing a good model to guide performance, and good examples, you're either requiring considerably more practice or risking an inappropriate emergent model. AI is not a panacea, but instead a tool in designing solutions (see 5). If the rest of the system has flaws, so will the resulting solution.
This is by no means a full set, nor a completely independent one. But it does reflect some principles that emerged from my discussions with people about various applications. I welcome your extensions, amendments, or even contrary views!