Can AI be ethical?
At times in the past year, it has seemed as if the pendulum were swinging against the use of artificial intelligence. After being wowed by AI-powered technologies winning Jeopardy! and beating Go and chess grandmasters, the world appears to be growing worried about the seemingly unlimited power and influence of AI in our lives. These concerns are valid and increasingly widespread, though they are often misplaced and misunderstood. Still, it's interesting to see governments worldwide start to publish guidelines and policies for the ethical use of AI. One could argue they should have done so long ago, but it's better late than never, assuming the policies and guidelines help assuage fears and address current and future real-world problems. And this brings us to the first problem: What is ethical AI? That is not an easy question to answer.
Though I am tempted to go down a philosophical rabbit hole, let's say that a world in which all people (or at least all rational adults) can agree on what is right and wrong, good and bad, sounds appealing. But it's not reality; what is ethically and morally defensible in one culture or location may be quite different somewhere else. There are some universally agreed-upon ethical business practices; almost everyone agrees, for example, that fraud is wrong. But when it comes to human decision-making and the application or removal of biases, whether perceived or real, nothing is so clear-cut.
AI, as a set of technologies or even as a general concept, has no hope of making sense of this global soup of ethics and morality. Even the EU's "Ethics Guidelines for Trustworthy AI" report, which states that "AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user's agency and foster fundamental rights, and allow for human oversight," is open to challenge. Research from the Economist Intelligence Unit, for example, finds that only 23 countries, representing less than 10% of the world's population, are fully democratic, while 57 countries, representing around 36% of the world's population, are authoritarian. At first glance, the concept of defining parameters for ethical AI may appear logical and universal; at second glance, it seems more like a pipe dream.
That is a long way of saying that you can define ethical AI in any way that suits your beliefs, which doesn’t sound very ethical at all.
My point here is to give a high-level sketch of the complex minefield we wander into when we start thinking about the influence of AI on business and society as a whole. Terms such as "bias," "accuracy," and "transparency" are used all the time in AI, but even here they are open to interpretation. Bias, for example, sounds like a bad thing, and it can be. If your AI-powered HR system discriminates against people of color, I would personally believe that to be a terrible thing indeed. But bias isn't always negative; it can, in fact, be positive. Biases are assumptions, and assumptions can be very accurate. Take, for example, the statement, "It always rains in Seattle or Glasgow, Scotland." What is essential is being aware of the biases: where they come from, where they lead, and what their impact is.
To put it another way, without inherent bias in the data, AI would not make decisions. Bizarre though it may seem, AI depends on bias being present. We can engineer negative bias out of our data, and we should. But we should also remember that biases are patterns of behavior, and patterns are what AI works with. The truth is that AI can never be unbiased, but it can be responsible and fair.
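To make "being aware of the biases" a little more concrete, here is a minimal sketch in Python of one common audit signal, the demographic parity gap: the difference in favorable-outcome rates between groups. The data, group labels, and HR-screening framing are all hypothetical, invented purely for illustration; real audits use richer metrics and real production data.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-outcome rate for each group.

    decisions -- list of 0/1 model outputs (1 = favorable outcome)
    groups    -- list of group labels, aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups.

    A gap of 0 means every group receives the favorable outcome at the
    same rate; larger gaps flag a disparity worth investigating.
    """
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening decisions from an HR model (1 = advance to interview).
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(selection_rates(decisions, groups))         # {'a': 0.6, 'b': 0.4}
print(demographic_parity_gap(decisions, groups))  # 0.2
```

A nonzero gap does not by itself prove the system is discriminating; it tells you where to look, which is exactly the awareness the argument above calls for.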