Cognitive Computing: Balancing the risks with the rewards from AI
The advantages of artificial intelligence, and in particular of cognitive computing, have been touted or attacked for some time with more heat than light. As is often the case, we see all sorts of AI and cognitive computing uses lumped into the same bucket, making it difficult to sort out truth and its consequences from hype and scare tactics.
Longtime denizens of the digital arena know that any new tool requires a clear understanding of purpose if it is to be used effectively. Cognitive and AI applications are no different. They bring both advantages and threats with them. During the past year, the Cognitive Computing Consortium, in partnership with Babson College, has been developing a more nuanced approach to using these new technologies. In the course of that research, it has become clear that a guide to understanding the types and uses of cognitive computing is needed, and we have begun to understand not just the types of cognitive applications but also the trade-offs and risks that must be considered for each.
Trade-offs
Business owners and developers must first choose the type of use they will make of cognitive computing. Having a clear vision of who will use the application, for what purpose, in which location, on what device and how the results will be used by either the user or the business is imperative. These are new technologies. They will have effects on customer relations and on the business in general, some good, some bad and many of them unforeseen. Here are some of the trade-offs we’ve been looking at:
1. Accuracy vs. exploration—Are you looking for some possible answers to a definable problem or for broad exploration of unknown patterns?
2. Speed of development and of response.
3. Who is the user? You can’t train customers, so you have to plan for a broad range of backgrounds and uses. Internal users may have enormous expertise and be impatient with designs that are too elementary.
4. Data—Curated or anything goes? Well-verified data can improve accuracy and trustworthiness. But it won’t include the social media free-for-all that lets you understand your potential audience.
5. Purpose—To save lives, support medical or financial decisions, recommend products to customers or pilot a self-driving car?
6. Degree and type of interactivity needed—The ability to understand questions, track the turns and twists of a conversation and offer suggestions is critical to information exploration. Making recommendations requires an entirely different type of interface.
7. Risk—Using the technology in the physical world can be a matter of life and death; in the digital world, the impact is more likely to fall on the bottom line.
The bottom line is that you can’t have them all. If you opt for greater accuracy, you will need domain knowledge, taxonomies and built-in relationships. But if you describe a domain (accuracy), you will inevitably leave out what you don’t know about (exploration).
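To make that trade-off concrete, here is a minimal sketch in Python, using a hypothetical curated vocabulary and invented documents; it shows how an accuracy-first match against a described domain passes over material the domain description never anticipated unless the application explicitly looks for it:

```python
# A minimal sketch, assuming a hypothetical curated vocabulary and
# invented documents, of the accuracy-vs-exploration trade-off.
CURATED_TAXONOMY = {"statin", "beta blocker", "ace inhibitor"}

documents = [
    "patient responded well to statin therapy",
    "beta blocker dosage reduced after follow-up",
    "novel gene-editing approach showed unexpected results",  # outside the taxonomy
]

# Accuracy-first: report only hits the curated vocabulary can vouch for.
taxonomy_hits = [d for d in documents if any(term in d for term in CURATED_TAXONOMY)]

# Exploration-first: surface what the described domain cannot account for,
# so unknown patterns are not silently dropped.
unaccounted = [d for d in documents if not any(term in d for term in CURATED_TAXONOMY)]

print("High-confidence matches:", taxonomy_hits)
print("Outside the curated domain:", unaccounted)
```

The choice of which list the application acts on is exactly the design decision described above: the taxonomy buys precision, but only an explicit exploration pass reveals what the taxonomy left out.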
Risks
There are risks—intended and unintended—in making each of these design decisions, including risk to the customer, risk to the business and risk to the software vendor.
Risk to the customer
♦ Will it affect the customer physically? An application to guide the blind in an unfamiliar environment must be very sure of its ability to recognize barriers and drops. A recommendation for a medical treatment must take into account not just a possible drug, but its side effects, availability (insurance, location, trained personnel to administer it) and the preferences of the patient.
♦ If the application is virtual, will the outcome permanently affect the customer’s finances, or will it simply recommend products to purchase based on preferences, location, climate and purchasing history?
♦ Hacking.
Risk to the business
♦ Incomplete information will produce poor business decisions.
♦ Algorithmic discrimination—Built-in faulty assumptions will affect results.
♦ Data discrimination—Biased data, or data that is not representative of the domain, will produce biased results (see the sketch after this list).
♦ Poor interaction design may turn software into shelfware.
♦ Hacking and malicious AI for competitive, political or monetary purposes.
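The data discrimination risk can be illustrated with a short Python sketch. The customer segments, spend figures and sampling bias below are entirely hypothetical; the point is only that a model trained on an unrepresentative sample reproduces the sample’s skew rather than reality:

```python
import random
from statistics import mean

random.seed(1)

TRUE_AVG_SPEND = 100.0  # in reality, both segments spend the same on average

def sample_records(n, shift=0.0):
    """Draw n noisy spend records, optionally shifted by a collection bias."""
    return [random.gauss(TRUE_AVG_SPEND + shift, 10) for _ in range(n)]

# Hypothetical training data: segment_a is well covered; segment_b has only a
# handful of records, all captured through a low-end channel (a -30 skew).
training = {
    "segment_a": sample_records(1000),
    "segment_b": sample_records(15, shift=-30.0),
}

# A naive "model" that predicts each segment's spend from its training mean
# inherits the skew in the data, not the truth about the customers.
for segment, records in training.items():
    print(f"{segment}: predicted {mean(records):.1f}, true {TRUE_AVG_SPEND:.1f}")
```

Any recommendation or pricing decision driven by the second segment’s estimate would be systematically wrong, through no fault of the algorithm itself; the bias entered with the data.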
Risk to the software vendor
♦ Loss of face—And reputation, income and market position.
Bottom line
The fact is that the effects of AI and cognitive computing will be even broader than those of current traditional computing systems. As we incorporate more and more data sources for better results, we also increase the likelihood of affecting more lives and more organizations. Even simple product recommendations can pull the rug out from underneath small businesses that are not recommended. When these technologies are embedded in larger applications and in devices, their benefits and risks become widespread, and users will assume they are reliable and trustworthy. Businesses must understand how their black boxes work. Customers must learn to be digitally and device-wary. Vendors must provide understandable tools for security, privacy and anonymization. Short of living on a disconnected desert island, there is no way that any vendor, business or customer can prevent every negative outcome. But we can hedge our bets by becoming knowledgeable and by not placing blind trust in brilliant technology.