

Signs, causes and machine learning

When a culture looks at bird entrails to predict the fate of a king, we laugh: Bird guts have no causal relation with the king’s life. But these cultures are not looking for causal relationships. For them, and for much of our own culture’s history, the universe is not a clockwork of causes but a web of meaning.

For example, we used to assume that plants that look like parts of the human body can cure diseases of those parts. We now know that’s wrong, but it is not as senseless as it first sounds. In his book The Order of Things, the philosopher Michel Foucault quotes the 16th-century medical genius Paracelsus:

It is not God’s will that what he creates for man’s benefit and what he has given us should remain hidden . . . And even though he has hidden certain things, he has allowed nothing to remain without exterior and visible signs in the form of special marks—just as a man who has buried a hoard of treasure marks the spot that he may find it again.

And here’s the twist ending: Machine learning is bringing us back to relying on signs over causes.

Nicki Minaj or Hello Kitty?

Whatever its ethical shortcomings, Cambridge Analytica’s promise was not ridiculous: By analyzing Facebook data, machine learning could predict which ads would work best on different users clustered by personality type. That analysis need not focus on, or even consider, overtly political information from Facebook. For example, in 2013, two psychologists at Cambridge University gave 58,000 volunteers a personality test and then correlated those psychological profiles with what the volunteers Liked on Facebook. It turned out that being extroverted correlated strongly with Liking Nicki Minaj, while openness correlated with Hello Kitty (https://perma.cc/8LR5-LCLG).

We can perhaps make up stories about why that’s so, but we can also imagine correlations that defy such attempts at explanation. For example, Cambridge Analytica may well have had access to more than what people Liked on Facebook. Applying machine learning to all that data might reveal—hypothetically—that writing long posts on weekdays, responding quickly to posts by people whose page one infrequently visits and using the word “etc.” a lot all correlate with being shy. Maybe posting photos that often show a city skyline in the background and double-clicking on buttons that only need a single click together correlate with liking cats over dogs and supporting the gold standard. Or whatever.
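To make the hypothetical concrete, here is a minimal sketch, in Python, of the kind of correlation screen such a system might run. Everything in it, from the column names (avg_post_length, reply_speed_secs, uses_etc_per_week) to the trait scores, is invented for illustration; it is not anyone’s actual pipeline.

```python
# Hypothetical sketch: screening behavioral signals for correlation with a trait.
# Every column name and number here is invented for illustration; nothing is drawn
# from Facebook, Cambridge Analytica, or any real dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000

# Fabricated behavioral signals.
df = pd.DataFrame({
    "avg_post_length":   rng.normal(120, 40, n),
    "reply_speed_secs":  rng.normal(300, 90, n),
    "uses_etc_per_week": rng.poisson(2, n).astype(float),
})

# Fabricated trait score, mildly driven by two of the signals plus noise,
# so that the screen below has something to find.
df["shyness"] = (
    0.01 * df["avg_post_length"]
    - 0.003 * df["reply_speed_secs"]
    + rng.normal(0, 1, n)
)

# Correlate every behavioral column with the trait and rank by absolute strength.
correlations = (
    df.drop(columns="shyness")
      .corrwith(df["shyness"])
      .sort_values(key=np.abs, ascending=False)
)
print(correlations)
```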

Signs, not causes

These examples may be made up, but this is where we are headed. Machine learning systems can look at data without instructions about how we think the pieces go together. The AI finds correlations and assembles them into webs of connection. Clicking the Like button for Nicki Minaj might make it much more likely that you’re an extrovert, but only a tiny bit more likely that you over-tip. Put those correlations into a web in which another hundred data points each make it slightly more likely that you’re an over-tipper, and the system might make a probabilistic prediction that you’re 86 percent likely to tip your Starbucks server two dollars when 50 cents would be enough and zero would be acceptable.
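As a rough illustration of how a web of weak signals can add up, here is a hedged, naive-Bayes-style sketch that folds a set of hypothetical likelihood ratios into a single probability. The numbers are made up to land near the 86 percent figure above; no real system is this simple.

```python
# Illustrative sketch only, not any real system's model: many weak signals
# combined into one probability by summing log-odds, naive-Bayes style.
import math

def combine_signals(prior, likelihood_ratios):
    """Start from a prior probability and fold in one likelihood ratio per signal.

    Each ratio says how much more common the signal is among over-tippers than
    among everyone else; values barely above 1.0 count as weak evidence.
    """
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# One hundred invented signals, each nudging the odds up by a mere 2.7 percent...
weak_signals = [1.027] * 100
print(combine_signals(prior=0.30, likelihood_ratios=weak_signals))
# ...together they lift a 30 percent prior to roughly 86 percent.
```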

We know these systems work. That’s why we use them. “Work” here means that if the system says there’s a 72 percent chance that you are an over-tipper, it’ll be right about 72 percent of the time. But here the AI is working as a system of signs and only accidentally as a system of causes. There is, as far as we know, no causal relationship between having an open personality and liking Hello Kitty. There is no causal relationship between double-clicking on buttons, preferring cats and over-tipping. Those turn out to be signs of a tendency to over-tip, not causes.
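That sense of “work” is what statisticians call calibration, and it can be checked by bucketing predictions and comparing each bucket’s stated probability with the rate at which the predicted outcome actually happens. The sketch below uses synthetic predictions and outcomes, not any real system’s output.

```python
# Minimal sketch of the calibration check described above: bucket predictions
# by predicted probability and compare against the observed rate in each bucket.
# The predictions and outcomes are synthetic, just to show the bookkeeping.
import numpy as np

rng = np.random.default_rng(1)
predicted = rng.uniform(0, 1, 10_000)           # the model's stated probabilities
actual = rng.uniform(0, 1, 10_000) < predicted  # simulate a perfectly calibrated model

bins = np.linspace(0, 1, 11)  # ten buckets: 0-10%, 10-20%, ...
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (predicted >= lo) & (predicted < hi)
    if mask.any():
        print(f"predicted {lo:.0%}-{hi:.0%}: "
              f"observed rate {actual[mask].mean():.0%} over {mask.sum()} cases")
# A well-calibrated model prints observed rates close to each bucket's range:
# events flagged at roughly 72 percent come true about 72 percent of the time.
```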

No need to know

Certainly, these signs may all spring from whatever causes over-tipping as a trait. Maybe they are all expressions of a need to be liked, of a fear of embarrassment or a sense of compassion. If the correlations are statistically valid, there is presumably some reason why they hold. But the causes may be manifold and subtle. We may never find them. Nor do we need to, so long as the machines give us accurate enough results.

This is clearly not the same as the ancient system of signs that was designed by God or that was an expression of the fundamental beauty of the universe. Our new system of signs is so chaotic that humans often simply cannot understand it; a machine learning system can have tens of thousands of variables and hundreds of millions of densely connected data points. If it were orderly and beautiful, we wouldn’t need powerful computers to see the system of signs and to make inferences from it.

We have gained something much closer to the probabilistic truth, without the simple order and beauty of the old system.

Sometimes the truth hurts. 
