Coming soon to your newsfeed: Ethics and AI
Rather suddenly, nearly every AI discussion, conference presentation, executive address, and substantial article seems to address one or more of the many questions surrounding AI ethics, or the lack thereof.
It’s taken a while for most of us to get around to considering the ethical implications that arise when machines start making decisions on their own. The delay is partly an intentional act of obfuscation by the new tech titans that use those machine decisions to control the industry’s immense new revenue streams (think Google, Facebook, Amazon, and the like). But it’s also a function of the preoccupations, and the noise, of the hype machines in the trade press and, increasingly, the mainstream press.
AI hype
Over the past several years, the hype machine has offered up successive themes for the conversation around AI. IBM kicked things off with “cognitive computing” and then “cognitive business”: the idea that smart computers would do ever more to help firms leverage their digital transformations into better profitability from big data. More recently, when the new tech titans started showing off their machine learning accomplishments (revolutions in auto-translation, machine reading of medical scans, accurate facial recognition, wins over humans at Go, fully self-driving cars, and more), the gist of the hype shifted: AI could all but miraculously solve thorny problems that humans had been struggling with forever.
Concerns about ethics
Currently, as the noisy promises of that earlier hype seemingly recede into the intermediate future, the hype machine’s focus appears to be turning toward ethics. What exactly will guide our AI programs to act for our greater good? What do we understand to be ethical action? Where does the need for it show up? How might we manage it? The Ethics and Governance of Artificial Intelligence Fund, backed by the Knight Foundation; Reid Hoffman; the Omidyar Network; the William and Flora Hewlett Foundation; and Jim Pallotta, founder of the Raptor Group, has seeded Harvard’s Berkman Klein Center and MIT’s Media Lab with a joint grant of $27 million to work through these questions for us.
But in the meantime, the problem with all of these hype-machine formulations is that hype by its nature turns everything one-dimensional. Hype needs an oversimplified message. It needs stage-set depth. It needs suspension of disbelief on the part of its audience. When it comes to ethics, however, hype is running into territory that is permanently multi-dimensional, complex, highly textured, and subject to deep engagement on the part of the people impacted by AI.
Why are the ethical issues surrounding AI seemingly so thorny? Here are some questions worth asking: What do we know about ethics? Where do we learn about it? Is there a context for ethics in our organizations?
Common beliefs
What most of us know about ethics probably includes some sense of the societal expectation that one should treat others as one would prefer to be treated oneself, a belief widely held across many cultures and religious traditions. But unless you were a philosophy major who stumbled across Aristotle’s Nicomachean Ethics in your undergraduate coursework, or one of the thousands of Harvard undergrads who take Michael Sandel’s mega-hit Justice course, you probably have not been exposed to much textured, detailed thinking about ethics. The keepers of the guidelines to “right action” have by default been the parents and imams and rabbis and monks and Dalai Lamas and priests of the world (maybe the priests not so much, it now appears).
Need for greater sensitivity
But now we need software developers, IT staffs, product managers, technology managers, innovation executives, and anyone else engaged in AI-related activities to be sensitive to the many ways ethical judgments are being baked into the projects they undertake. We already have chief compliance officers and chief innovation officers; is it too far-fetched to think that something akin to a chief ethics officer (another CEO) could be a helpful role as AI permeates our businesses?