
Can AI be ethical?

Coming up with legislation, guidelines, and policy to enforce the ethical use of AI is always going to be an uphill struggle. As a result, the current crop of government guidance documents reads well but is understandably vague on detail. And that, in my estimation, is a good thing. AI is a vast and ambiguous topic in and of itself, and it needs to be dealt with in broad terms, focusing only on measurable and critical pain points.

Possibly the best example of this recently came from the Scottish Government in its “Scotland’s AI Strategy” report released in March 2021, which included the following statements:

“AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being.”

“AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.”

“There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.”

“AI systems must function in a robust, secure, and safe way throughout their life cycles and potential risks should be continually assessed and managed.”

“Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.”

I happen to love these guidance statements and wholeheartedly support them. In practice, however, following and enforcing these principles will be massively challenging. The trend in AI today is to build black boxes and embrace deep learning. Though that approach has performance and practical advantages, transparency is not part of the equation. For example, though I am no lawyer, businesses and government departments in Scotland will have to be very wary of using deep learning if there is any possibility that the decisions made by the system could be challenged.

Maybe even more importantly, holding organizations and individuals accountable for the AI they use and work with will have profound implications. In practice, it means one cannot claim ignorance. You cannot blame the computer (AI); it is you who carries the can. This closes a loophole that I feel many in the tech field have hidden behind for too long: you can no longer claim that the system is too complex or unexplainable. Or, if you do, you are accepting responsibility for creating such an unexplainable and challenging situation in the first place.

To my mind, recommendations such as these are powerful and, if enforced, will ensure that organizations are more thoughtful and “ethical” in their use of AI. But recommendations are simply a starting point for a much more extensive discussion, one that many in the technology sector are unwilling to have. Everyone can agree that ethical AI is a good thing; the problem is that nobody can agree on how to get there, what it will look like, or, quite frankly, whether it is even worth the effort. That last statement may seem harsh, but technology developers and sellers have seldom given much thought to such issues. Their stance has been that these are tools, that they advance society, and that it is up to the buyers to make use of them; in other words, “It is not our problem.” But AI is not like previous technologies; it opens a Pandora’s box of ethical, moral, and legal issues, few of which are considered before the systems go live.

Ethical AI is an aspiration. AI itself has no moral or ethical compass; it is a blank and soulless canvas. To put it another way, at heart, AI is just a bunch of tools that predict probabilities. AI cannot be ethical or unethical; rather, it is used ethically or unethically, and that is an important distinction. We are responsible for what our AI systems do, for their decisions and their underlying biases, and we must ensure that what they do stays within the parameters of ethical and legal guidelines. If we can't do that, then we should question whether they should be used at all.
