

AI’s ways of being immoral


Looked at in an unforgiving light, everything is immoral, from giving your beloved a beautiful diamond that passed through bloodied fingers, to the Pop-Tart you just ate that was made from grain that shed poison into the earth. Even a parent’s love is marked by the preferential treatment they give their child at the expense of children at a greater distance and in greater need. So, of course machine learning (ML) is immoral too.

Moral challenges

But ML’s deepest moral challenges as a technology are unusual and possibly unique. Here are what I take to be the main areas of moral concern about ML, and the degree to which each is rooted in something essential about machine learning. First, ML is a tool of large corporations. The most powerful ML can require the resources of wealthy organizations. Such organizations usually have at best mixed motivations, to be charitable about it.

This is not a problem with ML itself but with the unequal distribution of resources in the societies that are inventing it. That doesn’t lessen the danger of AI, but it does mean this moral issue is not essential to the technology. Indeed, large and significant machine learning models have been developed by nonprofit organizations, including OpenAI (admittedly no longer as open as when it began), as well as by universities and scientific research organizations.

Second, ML is a threat to autonomy. This is an especially potent moral weak point when combined with the first one. The large corporations mounting AI projects often train it on the massive stores of personal data they’ve accumulated about us. They routinely use the resulting ML models to manipulate us, often in ways that are not in our best interests. This too is not a problem with the technology itself, although it is a real problem, of course. Many ML projects are not based on personal data and don’t threaten our autonomy. Take, for example, weather forecasting, climate change models, medical diagnostics, and route-finding road maps.

Third, AI threatens privacy. Privacy concerns often get mixed in with concerns about autonomy because they both spring from the use of personal data, but they are distinguishable. If a company’s ML model of us is derived from personal information we might not want exposed, but the company safely protects or destroys that data, in theory the model can manipulate us—subverting our autonomy—without putting our privacy at risk.

Now, there are ways in which personal data can sometimes be wrung from ML models (membership-inference and model-inversion attacks are two well-studied examples), so there are risks that the violation of our autonomy can lead to a violation of our privacy. That is a risk that responsible organizations guard against. But risks to privacy aren’t inherent in machine learning itself. Still, the fact that machine learning can use private data to manipulate us certainly encourages companies to generate and capture that private data to begin with, which makes its unwanted disclosure possible in the first place.
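
To make that concrete: in the simplest membership-inference attacks, an attacker merely checks how confidently a model handles a given record, since models tend to be more confident about data they were trained on. What follows is a minimal sketch of that idea, assuming PyTorch; the model, record, label, and threshold are all hypothetical placeholders, not any particular company's system.

```python
# Minimal sketch of a loss-threshold membership-inference test.
# Assumes PyTorch; `model`, `record`, `label`, and `threshold` are
# hypothetical placeholders for illustration only.
import torch
import torch.nn.functional as F

def looks_like_training_data(model, record, label, threshold=0.1):
    """Models tend to assign lower loss to examples they were trained on.
    A suspiciously low loss suggests the record may have been in the
    training set, leaking one bit of private information."""
    model.eval()
    with torch.no_grad():
        loss = F.cross_entropy(model(record), label)
    return loss.item() < threshold
```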

Fourth, and most concerning, is the fact that machine learning can be effective even when we can’t understand how it works. The complexity of its analyses and its ability to find significance in details are both its power and its danger. Those dangers are not just moral: The complexity of ML models can make it difficult to debug them, to spot mistaken outputs, and to protect them from being subverted by, say, a carefully positioned piece of tape on a traffic sign or a few altered pixels in an image.
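
The altered-pixels case is an instance of an adversarial attack. As a rough illustration of how little it can take, here is a minimal sketch of a one-step adversarial perturbation (the fast gradient sign method), again assuming PyTorch; the model, image, label, and epsilon value are illustrative assumptions.

```python
# Minimal sketch of a one-step adversarial perturbation (FGSM).
# Assumes PyTorch; `model`, `image`, `label`, and `epsilon` are
# hypothetical placeholders for illustration only.
import torch
import torch.nn.functional as F

def perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel by at most epsilon in the direction that
    increases the model's loss. Changes this small are often invisible
    to people, yet can be enough to flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```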
