
The Dethroning of Deduction


Perhaps you’ve heard this one before:

All people are mortal. Socrates is a person. Therefore: Socrates is mortal.

This is the classic example of the West’s preferred form of reasoning—deductive logic. We have long professed to prefer it because of the certainty of its conclusions: If you’re certain about the first two premises, known as the major and minor premises, then you can be certain that Socrates is going to be pining for the fjords eventually. (If you’re below the age of Monty Python, the fjords reference is a stand-in for “dead as a doornail.”)

Of course, you can validly deduce from premises that you’re not certain about, but the deduction will be only as certain as those premises. It’s hard to find premises that are completely certain outside of axiomatic systems such as logic or math, or games like chess, with stipulated rules that let you deduce with certainty that you are not allowed to hop your pawn over another pawn. You could change the rules when playing with your child so that hopping pawns are allowed, a move the child delightedly renames “frogs.” At some point, though, the child is likely to find out that they haven’t been playing real chess.

Inductive Reasoning

Induction is the leading alternative to—and partner of—deduction. It lets us infer results based not on rules but on prior evidence. The classical example is our inductive inference that the sun will rise tomorrow as it has every day of our existence. It’s a safe inference, but also one that could literally blow up at any time. Others are not so safe: Your experience shows that cars stop at stop signs, but if you think it’s therefore safe to go ahead without checking for cars at an intersection, you may regret it.

Inductive inferences are often worth relying on because life on Earth generally follows patterns, from cars stopping at stop signs to rain often following clouds. In fact, many events that are strictly governed by known rules and principles are better predicted through induction than deduction. That’s often the case when there are many elements interacting. For example, why did a small sheet of rain fall off a leaf exactly as you were passing under it? We believe there are universally true major premises—physical laws—that explain it, but there are so many interacting particulars that we can’t deduce it. Inductive logic would likely give you a more reliable, albeit probabilistic, estimate of whether walking under a tree will protect you from the rain.
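At its simplest, an inductive estimate is just frequency counting over past experience. Here is a minimal sketch in Python, with an invented list of observations standing in for that experience:

```python
# Inductive inference as frequency counting: estimate the chance of rain
# given clouds from past observations. The data below is invented for
# illustration only.
observations = [  # (was it cloudy?, did it rain?)
    (True, True), (True, False), (True, True), (False, False),
    (True, True), (False, False), (False, True), (True, False),
]

# Restrict attention to cloudy days and count how often it rained.
cloudy_days = [rained for cloudy, rained in observations if cloudy]
p_rain_given_clouds = sum(cloudy_days) / len(cloudy_days)
print(f"P(rain | cloudy) = {p_rain_given_clouds:.2f}")  # 3 of 5 cloudy days rained: 0.60
```

The answer is never a certainty, only a proportion of past cases, which is exactly the character of inductive conclusions.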

Machine Learning

All of this raises the urgent question: When will I get to my actual point, which is about machine learning? Answer: Now.

Machine learning distinctively relies on induction—analyzing vast amounts of data to find patterns and make predictions. It bypasses the rigidity of deduction, thriving in scenarios with incomplete or uncertain premises, such as weather forecasting or traffic prediction. It offers us probabilistic insights without promising absolute certainty. This frees machine learning to make nuanced predictions in complex, real-world scenarios where deductive reasoning would struggle due to incomplete premises being applied to unpredictable conditions. (It would be nice if AI always gave users its assessment of the reliability of its predictions, not only to help us decide what to make of a machine learning response, but also to remind us of the uncertainty with which all must live.)
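To make the contrast concrete, here is a toy sketch of that inductive style of prediction: a k-nearest-neighbors model that answers with a probability rather than a verdict. The weather features and history are invented for illustration; real systems work the same way at vastly larger scale:

```python
# A minimal machine-learning sketch: k-nearest-neighbors returning a
# probability instead of a yes/no answer. All data here is invented.
from math import dist

# Past days as ((humidity %, cloud cover %), rained: 1 or 0)
history = [
    ((90, 80), 1), ((85, 75), 1), ((30, 20), 0), ((40, 10), 0),
    ((70, 90), 1), ((20, 30), 0), ((60, 50), 0), ((80, 85), 1),
]

def rain_probability(features, k=3):
    """Fraction of the k most similar past days on which it rained."""
    nearest = sorted(history, key=lambda ex: dist(ex[0], features))[:k]
    return sum(label for _, label in nearest) / k

# A humid, cloudy day: the model reports how often similar days rained.
print(rain_probability((75, 70)))
```

No major premise is ever consulted; the prediction is nothing but past cases weighted by similarity, and the output is a degree of confidence a user can act on.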
