Ethical issues in AI and cognitive computing
Some innovations spring upon the world with little apparent warning. Most, however, are incremental changes that have been many years in the making, an evolution that builds on prior versions that had little immediate impact. It is not merely the most recent release but the convergence of market demand, technology readiness, and new invention that makes the world take notice. The sewing machine, the combustion engine, and the automobile all needed the insight of entrepreneurs as well as technologists to change the world.
That’s also the case with machine learning and AI. Inklings of their components have been around for decades. Suddenly they are headlines. The interesting thing about AI, though, is that many of the headlines today are less about the technical advances and more about the impact the invention will have on people and on society.
Questions of privacy, bullying, meddling with elections, and hacking of corporate and public systems abound. We demand technology solutions, but perhaps no clear solutions are possible. More abstract still are the effects that technology has on children, adults, and society at large. In a world where ethics and societal norms differ from one culture to another, can we come up with generally accepted societal norms for good behavior?
In May, we invited Tom Wilde of Indico, Steve Cohen of Basis Technology, and David Bayer of the Cognitive Computing Consortium to tackle the topic, “Beyond do no harm—The complex considerations of ethical AI.” This lively panel discussion was held at DBTA’s Data Summit conference in Boston. The panelists all had technology backgrounds, yet the issues they discussed were mainly legal and societal. This reflects the press’s preoccupation with problems, such as those posed by social networks, that may be insoluble by technology alone.
The free-flowing panel discussion covered a broad range of topics, including:
Trust
The issue of trust, and specifically how much to trust the recommendations and predictions of “black box” AI and cognitive computing systems, is central to AI ethics because it raises the question of expectations. After years of watching both the software industry and the buyers of software, we are convinced that vendors’ and buyers’ expectations of software perfection, or of the need for perfection, simply don’t match. The archives of computing journals are rife with discussions of how to develop software without bugs; complaints from users are just as common. How do we get these divergent expectations in sync? To avert widespread frustration and potential lawsuits, both sides need to develop a common approach.
The issue of trust touches both software use and development. Can we understand where and why recommendations are made by AI systems if we cannot audit and validate algorithmic results or test their repeatability? Can we trust the content on which recommendations are based? Is it drawn from sources that are authoritative and relatively unbiased?
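Repeatability, at least, is something practitioners can probe directly. As a minimal sketch (the scikit-learn model and synthetic dataset here are stand-ins for illustration, not a prescription), one basic audit is to confirm that retraining under identical conditions reproduces identical predictions, and to measure how much predictions shift when only the random seed changes:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test = X[:800], X[800:]
y_train = y[:800]

def predictions(seed):
    """Train a fresh model with the given seed and return its test predictions."""
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X_train, y_train)
    return model.predict(X_test)

# Repeatability: identical conditions should yield identical outputs.
assert np.array_equal(predictions(seed=42), predictions(seed=42))

# Sensitivity: how many predictions flip when only the seed changes?
disagreement = np.mean(predictions(seed=42) != predictions(seed=7))
print(f"Predictions that flip across seeds: {disagreement:.1%}")
```

A check like this does not open the black box, but it does establish a baseline: if a system cannot even reproduce its own answers, deeper questions about its reasoning become moot.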
Should we try to understand the motivation behind recommendations? Is the vendor’s profit motive well-aligned with what we are looking for? Often, a search for products is influenced more by the retailer’s inventory than by the searcher’s needs. Recommendation engines can easily prioritize profit over utility, value, or truth. That is true in the physical world as well, of course, but users often trust what comes out of a website somewhat blindly.
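To make the concern concrete, here is a deliberately simplified toy scorer (every product name, number, and weight below is invented for illustration). Even a modest weight on the retailer’s margin can push a poorly matched but high-margin item above a well-matched one:

```python
# Toy illustration: blending a margin term into ranking can reorder results.
products = [
    # (name, relevance to the query, retailer margin) -- all values invented
    ("well-matched budget item", 0.90, 0.10),
    ("poorly matched premium item", 0.60, 0.80),
]

def score(relevance, margin, profit_weight):
    """Blend relevance with margin; profit_weight=0 means pure relevance."""
    return (1 - profit_weight) * relevance + profit_weight * margin

for w in (0.0, 0.5):
    ranked = sorted(products, key=lambda p: score(p[1], p[2], w), reverse=True)
    print(f"profit_weight={w}: top result is {ranked[0][0]!r}")
```

With the weight at 0.0, the well-matched item ranks first; at 0.5, the high-margin item overtakes it. The user sees only the final ordering and has no way to tell which objective produced it.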
Bias
Bias is another pervasive concern. Training sets can easily contain misinformation, old information, or incomplete information that skews the results of a search. They can reflect bias in data and source selection. The content administrator may be unaware that other data exists outside the organization’s control. There may be hidden assumptions in taxonomies, ontologies, or schemas. As a result, algorithmic discrimination based on a rigid schema may make it difficult to present the “best” information for an individual’s needs. The vastness of large collections can make it difficult to “stumble” on more pertinent results.
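Some of these skews are at least measurable before a model is ever trained. As a rough sketch (the “group” and “label” columns and all of the counts are hypothetical, standing in for any categorical attribute and outcome), one can check how evenly a training set represents different groups and whether outcome rates diverge sharply across them:

```python
import pandas as pd

# Hypothetical training data: column names and counts are invented
# for illustration; any categorical attribute and outcome would do.
train = pd.DataFrame({
    "group": ["A"] * 80 + ["B"] * 20,
    "label": [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})

# How much of the training data does each group contribute?
print(train["group"].value_counts(normalize=True))

# Does the positive-outcome rate differ across groups?
rates = train.groupby("group")["label"].mean()
print(rates)
print(f"Outcome-rate gap between groups: {rates.max() - rates.min():.2f}")
```

Simple descriptive checks like these will not catch hidden assumptions baked into a taxonomy or schema, but they can flag the most obvious imbalances in what a system is learning from.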