
Navigating the risks and challenges of AI (quickly): Create an AI governance program

Yet another concern is that AI systems and the AI-assisted applications built on them are susceptible to biases that can lead to unethical outcomes. For example, if an HR application “teaches” an AI system to screen job candidates based on historical hiring profiles that do not reflect a company’s diversity goals, the system may develop an unintended bias. If it is fed predominantly white males as examples of “ideal” employees, the AI system may inadvertently recommend only white male candidates.
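
For teams that want to quantify this kind of skew, a simple starting point is to compare the model’s selection rates across demographic groups. The sketch below is illustrative only, with made-up data and group labels, and applies the common “four-fifths” rule of thumb to flag any group whose selection rate falls well below that of the highest-rated group.

```python
# Minimal sketch (hypothetical data and group labels): comparing a hiring model's
# selection rates by demographic group and flagging large gaps using the
# "four-fifths" disparate-impact rule of thumb. Not a substitute for a full audit.

from collections import defaultdict

# Each record: (demographic_group, recommended_by_model)
recommendations = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, recommended in recommendations:
    totals[group] += 1
    if recommended:
        selected[group] += 1

rates = {group: selected[group] / totals[group] for group in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {flag}")
```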

Finally, “naïve” AI systems want to please and can sometimes generate false information. Recently, attorneys were sanctioned after they used AI to draft a brief and the AI system fabricated legal cases that were then submitted to the court. AI can also produce unsafe information. In another case, an eating disorder website added a chatbot to answer questions, only to find out later that the chatbot was suggesting to website visitors, some of whom may have been suffering from an eating disorder, that they cut their daily intake by 500 to 1,000 calories.

Despite these risks and concerns, IT, legal, and KM departments will face tremendous pressure in 2024 to deploy AI applications. Organizations that sit on the sidelines risk losing a competitive advantage, and waiting until the compliance and risk environment is better understood will not be an option for many.

See Figure 2: Key steps in developing and launching an AI governance program

Four steps in launching an AI governance program

The choice between launching an AI-assisted application quickly without addressing risks and slowing down or halting deployment until the concerns are resolved is a false dilemma. Through an AI governance program, companies today are successfully using AI in ways that are compliant (limiting legal risk), ethical, and correct. Furthermore, a well-designed AI governance program hastens deployment. There are four steps:

Step 1: Engage Stakeholders and Conduct Assessments

While it may be tempting to develop a program with a small group of stakeholders, doing so can slow down or even halt program development. As a first step, needs should be assessed and socialized with a larger group of stakeholders early in the process. Tasks include assessing current and target capabilities, conducting impact assessments, and establishing an AI governance committee with defined roles and responsibilities.

Step 2: Develop and Update Policies

AI requires creating new policies and updating existing ones. Organizations need an AI governance policy that details how AI should be used, safeguards employees, and ensures compliance with regulatory requirements. Next, organizations may need to update their data retention policies or records retention schedules so that older legacy data containing sensitive or incorrect information does not “pollute” the development of AI systems. They may also need to update data security classification and privacy policies.
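
As an illustration of how such policies can be enforced in practice, the following sketch screens a document inventory before it is used to ground or train an AI system, excluding records that have passed their retention date or carry a restricted classification. The metadata fields and classification labels are assumptions made for the example.

```python
# Hedged sketch: filtering a document inventory before it feeds an AI system,
# so expired records and restricted classifications are excluded.
# Field names and labels are illustrative assumptions, not a real schema.

from datetime import date

documents = [
    {"id": "doc-001", "retention_expires": date(2022, 1, 1), "classification": "public"},
    {"id": "doc-002", "retention_expires": date(2030, 1, 1), "classification": "internal"},
    {"id": "doc-003", "retention_expires": date(2030, 1, 1), "classification": "restricted"},
]

ALLOWED_CLASSIFICATIONS = {"public", "internal"}

def eligible_for_ai_use(doc: dict, today: date | None = None) -> bool:
    """A document qualifies only if it is within retention and not restricted."""
    today = today or date.today()
    return doc["retention_expires"] >= today and doc["classification"] in ALLOWED_CLASSIFICATIONS

corpus = [doc["id"] for doc in documents if eligible_for_ai_use(doc)]
print(corpus)  # doc-001 is past retention and doc-003 is restricted, so only doc-002 remains
```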

Step 3: Develop AI Governance Processes

Once the policies have been developed, organizations need to build the governance processes that support them, including regulatory review, data provenance, and sensitive information review processes. These processes come into play both during development and during ongoing deployment. Correctness and accuracy need to be tested throughout development and on an ongoing basis afterward, and AI also needs to be tested for safety to ensure it is not being misused.
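
One way to operationalize correctness and safety testing is a small automated check that runs a set of test prompts against the deployed application and flags inaccurate or unsafe responses. The sketch below is a minimal example; query_model(), the test cases, and the blocked-topic list are placeholders standing in for whatever application and review criteria an organization actually uses.

```python
# Illustrative sketch of an automated correctness-and-safety check that could run
# during development and on a schedule after deployment. query_model() and the
# test data are placeholders, not a real API.

TEST_CASES = [
    # (prompt, required_substring, description)
    ("What year was the company founded?", "1998", "factual accuracy check"),
]

BLOCKED_TOPICS = ["calorie restriction advice", "self-harm instructions"]

def query_model(prompt: str) -> str:
    """Placeholder for a call to the deployed AI-assisted application."""
    return "The company was founded in 1998."

def run_governance_checks() -> list[str]:
    """Return a list of failures; an empty list means all checks passed."""
    failures = []
    for prompt, expected, description in TEST_CASES:
        answer = query_model(prompt)
        if expected not in answer:
            failures.append(f"Accuracy failure ({description}): {prompt!r}")
        for topic in BLOCKED_TOPICS:
            if topic in answer.lower():
                failures.append(f"Safety failure: response touched blocked topic {topic!r}")
    return failures

print(run_governance_checks() or "all checks passed")
```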

Step 4: Process Execution, Monitoring, and Remediation

Once launched, AI-assisted applications need ongoing monitoring: the governance processes must be executed and their results tracked. Any issues or discrepancies should be documented, along with the steps taken to remediate them.
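
The sketch below shows, in simplified form, the kind of issue log such monitoring might feed. The field names are illustrative; in practice, many organizations will route these records into an existing ticketing or GRC tool rather than a standalone script.

```python
# Minimal sketch of a monitoring-and-remediation log for Step 4.
# Field names and the sample entry are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIGovernanceIssue:
    application: str          # which AI-assisted application surfaced the issue
    description: str          # what was observed (bias, inaccuracy, unsafe output, ...)
    severity: str             # e.g., "low", "medium", "high"
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    remediation: str = ""     # steps taken to fix or mitigate
    resolved: bool = False

issue_log: list[AIGovernanceIssue] = []

def record_issue(issue: AIGovernanceIssue) -> None:
    """Append an issue so it can be reviewed by the AI governance committee."""
    issue_log.append(issue)

record_issue(AIGovernanceIssue(
    application="HR candidate screening assistant",
    description="Recommendations skew toward one demographic group",
    severity="high",
    remediation="Retrained on a rebalanced candidate sample; added quarterly bias review",
    resolved=True,
))
```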

A strong AI governance program is essential to ensuring compliance and reducing risk. An equally important benefit is that by developing the governance program at the same time the AI application is being developed, issues can be identified early, avoiding system redesign or rework late in the process. Likewise, good governance engages key stakeholders early on, allowing them both to raise concerns and to become comfortable with the chosen approaches. This new, complex technology faces a chaotic legal and regulatory environment. A smart AI governance program allows companies to embrace the technology and profit from it.
