I had the opportunity to attend a special event pondering the ethical issues that surround Artificial Intelligence (AI). The event, hosted by the Institute for the Future, had us gather in groups beforehand to generate questions that were used in a subsequent session. Vint Cerf, co-developer of the TCP/IP protocol that enabled the internet and now at Google, responded to the questions. Quite the heady experience!
The questions were quite varied. Our group looked at Values and Responsibilities, and I asked whether those were for the developers or for the AI itself. Our conclusion was that it had to be the developers first. We also considered what has been done about the ethics of other technologies (e.g. disease research, nuclear weapons), and what is unique to AI. A respondent mentioned an EU initiative to register all internet AIs; I didn’t have the chance to ask about policing and consequences, which strike me as concomitant issues!
One of the unique areas was ‘agency’, the ability for an AI to act. This led to a discussion of the need for oversight of AI decisions. However, I suggested that if the AI was mostly right, human overseers would fatigue and stop checking. So we pondered: could an AI monitor another AI? I also noted that there’s evidence that consciousness is emergent, and so we’d need to keep the AIs from communicating. It was pointed out that the genie is already out of the bottle, with chatbots online. Vint suggested that our brain is layered pattern-matchers, so maybe consciousness is just the topmost layer.
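To make the “AI monitoring another AI” idea concrete, here is a minimal sketch of one way it might work: a secondary checker that escalates only the primary system’s low-confidence decisions to a human, so people review the cases most likely to be wrong rather than everything. All names and rules here are hypothetical stand-ins, not any real system’s design:

```python
# Hypothetical sketch: a monitor that triages another model's decisions,
# escalating only suspicious ones to a human reviewer.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # what the primary AI decided
    confidence: float  # the primary AI's own confidence, 0.0-1.0

def primary_ai(case: str) -> Decision:
    # Stand-in for a real model; here, a toy rule.
    if "refund" in case:
        return Decision("approve", 0.95)
    return Decision("deny", 0.55)

def monitor_ai(decision: Decision, threshold: float = 0.8) -> bool:
    """A second checker (here, a simple confidence rule) that flags
    decisions for human review instead of asking humans to check all."""
    return decision.confidence < threshold

for case in ["refund request #1", "unusual claim #2"]:
    d = primary_ai(case)
    if monitor_ai(d):
        print(f"{case}: ESCALATE to human ({d.label}, conf={d.confidence})")
    else:
        print(f"{case}: auto-accept ({d.label}, conf={d.confidence})")
```

The design point: the human sees only escalations, which addresses the fatigue problem, though of course the monitor itself then needs to be trustworthy.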
One recourse is transparency, but it needs to be rigorous. Blockchain’s distributed, tamper-evident transparency could be a model. Of course, one of the problems is that we can’t even explain our own cognition in all instances (we make up stories that don’t always correlate with the evidence of what we actually do). Likewise, with machine learning we may be making up stories about what the system is using to analyze behaviors and make decisions, and those stories may not correlate with what the system is actually doing.
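As an illustration of what blockchain-style transparency might mean here, a minimal sketch: a hash-chained, append-only log of AI decisions, where tampering with any entry breaks the chain and is detectable by an auditor. This is just the chaining idea in miniature, not a distributed ledger:

```python
# Hypothetical sketch of blockchain-style transparency: each log entry's
# hash covers the previous entry's hash, so rewriting history is detectable.

import hashlib
import json

def make_entry(prev_hash: str, decision: dict) -> dict:
    payload = json.dumps(decision, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev_hash": prev_hash, "decision": decision, "hash": entry_hash}

def verify_chain(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["decision"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
prev = "genesis"
for decision in [{"input": "loan app 1", "output": "approve"},
                 {"input": "loan app 2", "output": "deny"}]:
    entry = make_entry(prev, decision)
    log.append(entry)
    prev = entry["hash"]

print(verify_chain(log))                   # True
log[1]["decision"]["output"] = "approve"   # tamper with the record
print(verify_chain(log))                   # False: the chain is broken
```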
Similarly, machine learning is very dependent on the training set. If we don’t pick the right inputs, we might miss factors that are important to producing good answers. Even with the right inputs, if we don’t have a training set with a representative mix of good and bad outcomes, we get biased decisions. It’s been said that people are good at crossing the silos, whereas machines tend to be good in narrow domains. This is another argument for oversight.
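A toy sketch of how a skewed training set produces biased decisions, assuming a deliberately simplistic “model” that just memorizes outcome rates per group (real systems are subtler, but the failure mode is the same):

```python
# Hypothetical sketch of training-set bias: if the sample over-represents
# bad outcomes for one group, the model's decisions inherit that skew,
# even when the underlying populations behave identically.

from collections import defaultdict

def train(examples):
    # examples: list of (group, outcome) pairs; outcome 1 = good, 0 = bad
    counts = defaultdict(lambda: [0, 0])  # group -> [good, total]
    for group, outcome in examples:
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: good / total for g, (good, total) in counts.items()}

def decide(model, group, cutoff=0.5):
    return "approve" if model.get(group, 0) >= cutoff else "deny"

# Suppose both groups actually succeed 70% of the time, but our sample
# of group B happened to capture mostly failures.
biased_sample = ([("A", 1)] * 7 + [("A", 0)] * 3 +
                 [("B", 1)] * 2 + [("B", 0)] * 3)
model = train(biased_sample)
print(model)               # {'A': 0.7, 'B': 0.4}
print(decide(model, "A"))  # approve
print(decide(model, "B"))  # deny: the bias came from the sample, not reality
```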
The notion of agency also brought up the issue of decisions. Vint inquired why we were so lazy in making decisions; he argued that we’re making systems we no longer understand! I didn’t get the chance to answer that decision-making is cognitively taxing, so we often work to avoid it. Moreover, some of us are interested in X and so are willing to invest the effort to learn it, while others are interested in Y, so it may not be reasonable to expect everyone to invest in every decision. Also, our lives keep getting more complex: when I grew up, you just had a phone and a TV; now you need to worry about internet, and cable, and mobile carriers, and smart homes, and… So it’s not hard to see why we want to abdicate responsibility when we can! But when can we, and when do we need to be careful?
Of course, one of the issues is AI taking jobs. Cerf stated that innovation takes jobs, and generates jobs as well. However, the problem is that those who lose the jobs aren’t necessarily capable of taking the new ones. That brought up learning to learn as the increasingly key ability for people, which I support, of course.
The overall problem is that there isn’t central agreement on what ethics a system should embody, even if we could embody it. We currently have different cultures with different values. Could we find agreement when groups hold different views of what, say, acceptable surveillance would be? Is there some core set of values required for a society to ‘get along’? Even that might vary by society.
At the end, there were two takeaways. For one, the question is whether AI can help us help ourselves! And the recommendation is that we should continue to reflect and share our thoughts. This is my contribution.
William Ryan says
I believe AI (and the IoT) can help us help ourselves by gathering the data to bring us trends and analytics sooner, so we can make better, more informed decisions. Key to this is the need “we” have to define what outcomes are being sought (what does success look like?) and how those outcomes will be known (how will we measure success?). AI can help us measure what matters, but we have to define what matters first. Sounds like an amazing experience; the IFTF is an amazing group.