Last night I attended a session on “Our Relationship with AI” sponsored by the Computer History Museum and the Partnership on AI. In a panel format, noted journalist John Markoff moderated Apple's Tom Gruber, AAAI President Subbarao Kambhampati, and IBM Distinguished Research Scientist Francesca Rossi. The overarching theme was: how are technologists, engineers, and organizations designing AI tools that enable people and devices to understand and work with each other?
It was an interesting session, with the conversation ranging from what AI is, to what it could and should be used for, to how to develop it in appropriate ways. Concerns about AI's capabilities, roles, and potential misuses were addressed. Here I'm presenting just a couple of the thoughts it triggered, since I've previously riffed on IA (Intelligence Augmentation) and ethics.
One of the questions that arose was whether AI is engineering or science. The answer, of course, is both. There’s ongoing research on how to get AI to do meaningful things, which is the science part. Here we might see AI that can learn to play video games. Applying what’s currently known to solve problems is the engineering part, like making chatbots that can answer customer service questions.
A related question was what AI can do. Put very simply, the proposal was that AI can do anything you can make a judgment on in about a second: whether what you see is a face, say, or whether a claim is likely to be fraudulent. If you can provide a good (large) training set that says ‘here’s the input, and this is what the output should be’, you can train a system to do it. Or, in a well-defined domain, you can say ‘here are the logical rules for how to proceed’, and build that system.
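To make the “here’s the input, here’s the output” idea concrete, here’s a minimal sketch (my illustration, not the panel’s) using scikit-learn; the features, numbers, and labels are entirely made up:

```python
# Hypothetical fraud-screening example: learn a mapping from labeled
# examples to a yes/no judgment. Data and feature choices are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [claim_amount, days_since_policy_start]
X = [[120, 400], [9500, 12], [300, 800], [8700, 5], [150, 600], [9900, 3]]
y = [0, 1, 0, 1, 0, 1]  # 0 = legitimate, 1 = likely fraudulent

model = LogisticRegression()
model.fit(X, y)  # learn the input-to-output mapping from the training set

print(model.predict([[9000, 7]]))  # make the "one-second" judgment on a new claim
```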
Another point was that the ability to do these tasks is what leads to fear: “Wow, they can be better than me at this task; how soon will they be better than me at many tasks?” The important counterpoint was that these systems can’t generalize beyond their data or rules. They can’t say: ‘oh, I played this driving video game, so now I can drive a car’.
Which means that the goal of artificial general intelligence, that is, a system that can learn and reason about the real world, is still an unknown distance away. Such a system would either need a full set of knowledge about the world, or it would need both the capacity and the experience that a human learns from (starting as a baby). Neither approach has shown any sign of being close.
A side issue was that of the datasets. It turns out that datasets can carry implicit biases, and systems trained on them learn those biases. One case study mentioned was a camera that flagged Asian faces with ‘blinking’ warnings, owing to typical eye shape (and this was from an Asian company!). Similarly, word associations learned from text ended up biasing women towards kitchens and homes, compared to men. This raises a big issue when it comes to making decisions: could loan offerings, fraud detection, or other applications of machine learning inherit bias from their datasets? And if so, how do we address it?
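Here’s a toy sketch of the mechanism (again my own made-up illustration, not the panel’s example): if the historical decisions in the training data were skewed against one group, the trained model reproduces that skew even though no one told it to.

```python
# Invented loan data: past approvals penalized zip_code_group 1
# regardless of income, so the model learns that bias.
from sklearn.tree import DecisionTreeClassifier

# Each row: [income_in_thousands, zip_code_group]
X = [[40, 0], [80, 0], [60, 0], [40, 1], [80, 1], [60, 1]]
y = [1, 1, 1, 0, 0, 0]  # 1 = loan approved, 0 = denied (historically biased)

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[80, 1]]))  # a high-income applicant is still denied
```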
Another issue was that of trust. When do we trust an AI algorithm? One suggestion was that it would come through experience (repeatedly seeing benevolent decisions or support), which wouldn’t be that unusual. We might also employ techniques that work with humans: authority of the providers, credentials, testimonials, etc. One of my concerns was whether that could be misleading: we trust one algorithm, and then transfer that trust (inappropriately) to another. That wouldn’t be unknown in human behavior either. Do we need a whole new set of behaviors around NPCs (Non-Player Characters, a gaming term for agents that are programmed, not people)?
One analogy that was raised was to the industrial age. We started replacing people with machines. Did that mean a whole bunch of people were suddenly out of work? Or did that mean new jobs emerged to be filled? Or, since machines are now doing human-type tasks, will there be fewer tasks overall? And if so, what do we do about it? It clearly should be a conscious decision.
It’s clear that there are business benefits to AI. The real question, and this isn’t unique to AI but happens with all technologies, is how we decide to incorporate the opportunities into our systems. So, what do you think are the issues?