
The search for explanations

That our predictions and explanations are so different tells us something about how we think the future works.

For example, my friend Tim (or so let’s call him) was 46 when he gave in to his wife’s counsel and went for an overdue physical exam. Fortunately, he was told that although he could stand to lose a little weight, all systems were in good shape. Three months later, he had a serious heart attack. He has since recovered because his cardiologist looked at the same data and prescribed daily medication, changes to his diet, and an exercise regimen.

There was no incompetence anywhere in this process. In fact, there was only excellent healthcare. It’s just that reading backward is so very different from projecting forward, even with the same pieces of the puzzle in front of us. In a fully determinate universe, that shouldn’t be the case. Or so Pierre-Simon Laplace suggested in 1814 when he postulated that if an omniscient demon knew everything about the universe at any one moment, it would be able to predict everything that will happen and everything that has happened. Laplace’s demon’s explanations are exactly the same as its predictions, except the predictions look forward and the explanations look back.

The causal path

If the universe is the way Laplace described it, then predictions and explanations are different for us humans only because we don’t know as much as that omniscient demon. (Laplace was an atheist, which is perhaps why he talks about a demon instead of God.) But there was only one difference between the prediction and the explanation of Tim’s heart attack: a heart attack happened. The causal path looking forward and backward is the same, but once we know that Tim’s heart failed, we can see the path. We can reconstruct it.

Or at least we think we can. It should concern us that we find reasons for just about everything, from how we picked up a virus, to why the car ahead of us sat at the light, to why Putin poses with his shirt off. It’s true that we have categories for “accidents” and “Acts of God” where we don’t even try to come up with reasons, but generally if something happens, we envision the path that led up to it. We are a species that explains things.

But-for-x

We can do this because usually an explanation points only to what’s called the sine qua non cause, or the “but-for-x” cause, as in “But for that nail, we wouldn’t have gotten a flat tire.” That’s a good explanation. But there are many other “but-fors” that apply to that situation: But for the fact that we were driving a car, but for tires being made out of a material softer than iron, but for pointy objects being able to penetrate materials as stiff as tires, but for our having been born after pneumatic tires were invented, but for extraterrestrials not using space magnets to pull all iron objects off the surface of the earth, etc.

The sine qua non form of explanation has such deep roots in our thinking because of the social role of explanations. We generally want explanations for events that vary from the norm: Why did we get a flat? Why did I catch a virus? Why did the guy in the car ahead of me sit through an entire green light even though I honked? For each of these special cases we find the “but-for-x” explanation that points to what was special in each case. Outside of science, which looks for explanations of normal things—the sun rising, magnets attracting—we usually want explanations for things that violate our expectations and thus look for the exceptional fact.

But that means that explanations aren’t doing exactly what we usually think they’re doing. They are not explaining how the world works. In fact, by focusing on what’s unusual, explanations mask the enormous richness and complexity of what’s usual.

Then there’s the unsettling truth that machine learning is putting before our reluctant eyes: There may be no dominant unusual fact that brings about an event and that can serve as a useful explanation. A machine learning diagnostic system might have learned that a particular constellation of thousands of variables means there’s a 73 percent chance that you will have a heart attack within the next five years. But changing any one of those thousands of variables may shift that probability only minutely; there may be no dominant “but-for” cause.
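
To make that concrete, here is a minimal sketch, not drawn from any real diagnostic system: a toy logistic model in Python in which thousands of weak signals add up to a substantial predicted risk, yet flipping any single variable barely moves the number. The feature count, the weights, and the risk() helper are all invented for illustration.

```python
# Toy illustration only: a hypothetical logistic model over thousands of weak
# risk factors, where no single variable dominates the prediction.
import numpy as np

rng = np.random.default_rng(0)

n_features = 2000
weights = rng.normal(0.001, 0.02, n_features)   # thousands of tiny contributions
patient = rng.integers(0, 2, n_features)        # hypothetical binary risk factors

def risk(x):
    """Predicted probability of the event under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

baseline = risk(patient)

# Flip each variable one at a time and record how far the prediction moves.
shifts = []
for i in range(n_features):
    altered = patient.copy()
    altered[i] = 1 - altered[i]
    shifts.append(abs(risk(altered) - baseline))

print(f"baseline risk:                 {baseline:.1%}")
print(f"largest single-variable shift: {max(shifts):.2%}")
print(f"median single-variable shift:  {np.median(shifts):.2%}")
```

In a sketch like this, the overall prediction can be quite confident while flipping any one variable typically moves it by well under a percentage point; there is no single fact you can point to and say, “but for that, no heart attack.”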

This makes explanations far more like how we think about our own lives when we pause to marvel at how we got wherever we are. There are too many “ifs” to count. If Dad hadn’t been so supportive or so angry. If you hadn’t had to take that course because the one you wanted was already filled. If you had looked right instead of left when stepping off that curb. If you had ordered the quinoa instead of the eggplant. If you hadn’t stepped into that one gin joint in all the towns in all the world. We got to here—wherever we are—because of innumerable things that happened and a larger number of things that did not. We got here because of everything.

It takes moments like that to remind us of what explanations hide from us.
