Cognitive computing: Strategies for survival
“Many smart people are thinking about the robot-filled society of the future, and they are widely distributed on the basic question of whether we are all going to hell in a handbasket.” (Only Humans Need Apply, Tom Davenport and Julia Kirby, 2016)
Doomsayers abound these days. There are plenty of statistics that show automation taking a toll on knowledge workers’ jobs as machines get smarter. The parallels with the Industrial Revolution are striking. Should we all just give up and wait for the Singularity to take over our lives?
Refreshingly, in a new book on the subject, Davenport and Kirby beg to differ. They don’t just push back against the gloom; they provide evidence that the future is more hopeful than not. Even better, they describe a set of strategies for both knowledge workers and their managers to position themselves for the changes that are already happening in the workplace. Only Humans Need Apply explores the minuet between computers and humans at the dawn of the cognitive computing era. What’s our role? What’s theirs? Who will be replaced? What can each partner contribute?
Davenport and Kirby offer five strategies to help knowledge workers reposition themselves for this next era. The right strategy depends on who you are, how you like to work, and whether you are a risk taker who revels in change or someone who avoids it. For example: an underwriter who uses a smart machine as an assistant so that he can spend more time personally advising customers, or perhaps one who uses her experience to spot inconsistencies and subtle differences in financial situations that are not apparent to a machine. An underwriter might become the “automation boss,” deploying and managing systems. Or, instead, an underwriter may choose to develop a specialty so narrow that it is not worth automating. Finally, as in any field, an underwriter might become an entrepreneur, a visionary who spots a need and develops a new product.
The authors offer much more than a self-help guide to surviving the cognitive computing era, however. The book spotlights technology trends and new companies that epitomize them. It discusses automation vs. augmentation as competing strategies and shows why complete automation is self-defeating because it locks the enterprise into a solution that will soon be obsolete. It makes a convincing case for augmentation—people and machines working in tandem to complement each other.
The augmentation approach gives adopters room to adjust as processes and goals change. People provide the flexibility and the analytical skills to spot bottlenecks and improve methods; it is human nature to adapt to change. Machines amplify human strengths and can help make up for human imperfections. They handle massive amounts of information and are not blinded by bias or embarrassed by error. They are consistent—sometimes to a fault. What they can’t do particularly well is think intuitively, understand the human implications of a decision, make moral judgments or bring to bear experience of the real world.
The fact is that we are awash in data. We need strategies to handle the information overflow. The last computing era helped us manage and mine predictably structured data. That was a leap forward, but it has also confined our understanding to patterns and variables we have already identified. We need to find the unknowns—the surprises, the black swans. This is why we see growing adoption of new methods for managing, merging and mining information, including cognitive computing, big data technologies and the Internet of Things.
Successful organizations have found that being able to find, analyze and use what they “know” gives them a competitive edge in treating patients, finding and assessing threats or managing their customers. The problems they are tackling are complex, the information is ambiguous and there is no single right answer. Instead there are only best answers for a given set of circumstances. These problems can’t be resolved by the data in a monthly sales report. They are uniquely human in their complexity.
Neither computer nor human is perfectly suited to solving problems that are multifaceted, ambiguous and dependent on too many variables to take in quickly. From deciding what color to paint a house to treating a life-threatening disease, the choices can be overwhelming and the answers depend on who you are, where you are and what resources you have at your disposal. Most of these problems are a mixture of information gathering, categorization, eliminating the extraneous, analyzing what’s left and then discussing in order to make a decision.
The information that fuels these decisions is largely unstructured and unpredictable in how it is formatted or expressed. IDC’s excellent report series, The Expanding Digital Universe (emc.com/leadership/digital-universe/index.htm?pid=landing-digitaluniverse-131212), looks at types of data, data growth and, most recently, which data is actually used and useful. It’s a woefully small percentage. Harnessing new data tools to tag and organize information that we already have amassed is an obvious first step to competing on information.
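To make that first step concrete, here is a minimal sketch of tagging and grouping a handful of documents. The documents and the simple keyword-frequency heuristic are hypothetical stand-ins, not anything from the book or the IDC report, and far cruder than the tools the authors have in mind; the point is only that even rough tags start to organize what has been amassed.

from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "that"}

def tag_document(text, num_tags=3):
    # Suggest tags from the most frequent content words in one document.
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(num_tags)]

def organize(documents):
    # Group document titles under each suggested tag to build a crude index.
    index = {}
    for title, text in documents.items():
        for tag in tag_document(text):
            index.setdefault(tag, []).append(title)
    return index

# Hypothetical documents standing in for content an organization has already amassed.
documents = {
    "claims_memo": "The underwriter flagged inconsistencies in the claims history of the account.",
    "renewal_note": "Renewal pricing for the account depends on the claims history and the customer's risk.",
}
print(organize(documents))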
If we look at the role of machines in knowledge work as augmentation of human abilities instead of man vs. machine, it’s apparent that the two are complementary. Complex decisions can be helped along by having someone (or something) sift through too much information, organize it into choices and provide supporting evidence. It’s a distinct advantage to have the computer ingest and tentatively organize massive data from multiple sources. But it takes a human to see if the choices and the evidence make sense. Computers may avoid some biases or find unknown patterns. But we need humans to introduce matters of taste, circumstance, preference or personality. The human provides the understanding, the synthesis, the intuition that tells us if the recommendations make sense.
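As an illustration of that division of labor, the small sketch below uses made-up options and evidence: the machine’s job stops at ranking the choices and attaching supporting snippets, and the final judgment, informed by taste, circumstance and experience, is left to a person.

def score(option, evidence):
    # Crude relevance: how many evidence snippets mention one of the option's keywords.
    return sum(any(k in snippet.lower() for k in option["keywords"]) for snippet in evidence)

def shortlist(options, evidence, top_n=2):
    # The machine's role stops here: sift, rank and attach supporting evidence.
    ranked = sorted(options, key=lambda o: score(o, evidence), reverse=True)
    return [
        {"name": o["name"],
         "support": [s for s in evidence if any(k in s.lower() for k in o["keywords"])]}
        for o in ranked[:top_n]
    ]

# Made-up evidence and options; a real system would ingest many sources.
evidence = [
    "Premiums rose 4% after the new risk model was adopted.",
    "Customers in the pilot preferred talking to an advisor over self-service tools.",
]
options = [
    {"name": "Automate underwriting end to end", "keywords": ["risk model", "premium"]},
    {"name": "Pair underwriters with a smart assistant", "keywords": ["advisor", "customer"]},
]

for choice in shortlist(options, evidence):
    print(choice["name"], "| evidence:", choice["support"])
# A person reviews the shortlist and the evidence, then makes the call.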