
Representing the world

We’re now emerging from a crazy idea that has plagued the West for centuries, if not millennia: representationalism, the claim that consciousness is an inner representation of an outer reality, and therefore that to know the world is to have accurate representations of it. That idea is now on the run, chased by a confluence of philosophy and technology that is changing our ideas about how to know the world.

Representationalism shows up strongly in our assumptions about how communication works. We imagine one person translating an idea into the arbitrary symbols we call language, sending those symbols out over a medium such as air or paper, and having the recipient translate them back into ideas. If the inner picture constructed by the second person is the same as the inner picture constructed by the first, we say communication has succeeded. This may seem obvious, but that’s only because we’re so used to the idea.
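
To see how literal this encode-transmit-decode picture is, here is a minimal sketch in Python (the function names are illustrative, not from any library): an “idea” is turned into symbols, carried over a medium, and reconstructed, with success defined as the two inner pictures matching.

```python
import json

def encode(idea: dict) -> str:
    """The speaker translates an inner idea into arbitrary symbols (language)."""
    return json.dumps(idea)

def transmit(symbols: str) -> str:
    """The medium -- air, paper, wire -- carries the symbols along."""
    return symbols

def decode(symbols: str) -> dict:
    """The recipient translates the symbols back into an inner idea."""
    return json.loads(symbols)

idea = {"request": "help", "when": "now"}
received = decode(transmit(encode(idea)))

# On the representationalist model, this equality test *is* what it means
# for communication to have succeeded.
print("communication succeeded:", received == idea)
```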

Challenging the concept

Yet, representationalism has been challenged by philosophers for the past hundred years or so. In the early 20th century, the Pragmatists said that truth isn’t a picture of the world so much as a tool that works in the world. In the 1950s, Ludwig Wittgenstein said that if you want to know what a word means, don’t look for what it represents. Instead, look at how we use it to do things—make a promise, ask for help, etc.—according to complex, unwritten “rules.”

Starting in the late 1920s, Martin Heidegger talked about communication not as the reproduction of an idea inside someone else’s head, but as a way in which people turn to the world, together uncovering something about it. The notion that we are each locked inside our own heads can emerge only because we are first of all out together in a shared world. Over the past 20 years, the Extended Mind theory has taken this further, arguing that we think out in the world using physical tools. That means thinking isn’t just mental content locked in our skulls.

Machine learning enters

Since the 1960s, the postmodernists (who go by many labels, but usually not that one) have found ludicrous the idea that we could “read” the world and represent it in our heads. Language shapes our experience, carrying with it all the richness and destructive baggage of social structures, culture, and history. The illusion of representationalism itself has its roots in a desire to control and master the world and others. (I can promise that not a single postmodernist will find that summary even partially satisfactory.)

It can take a generation or two, but these sorts of philosophical ideas do influence our everyday “common sense.” But the rejection of representationalism is being hastened by the rise of a new technology, machine learning, that is refuting some of our old common-sense ideas.

Before machine learning, programmers would model real-world systems in the rules and data they fed into computers. This is a type of representationalism. But over time we’ve run into the limitations of this approach. For example, 30 years ago, Rodney Brooks at MIT showed that you could program a simple robot to mimic how a cockroach explores its world without giving it a map of its surroundings. Instead, you give it just a few basic rules of movement. That opens up the thought that perhaps we humans also don’t always navigate by having an inner picture of the world we’re moving around in.
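
To make the contrast concrete, here is a minimal sketch in Python of that rule-based approach (the sensor and motor interface is hypothetical, and Brooks’ actual subsumption architecture was more elaborate). The controller carries no map and stores nothing about the room; exploration comes entirely from a handful of reactive rules.

```python
import random

def step(bumped_left: bool, bumped_right: bool) -> tuple[float, float]:
    """Return (left_wheel, right_wheel) speeds from raw bump sensors.
    No map, no memory: just a few fixed rules of movement."""
    if bumped_left and bumped_right:
        return (-1.0, -0.5)                  # blocked ahead: back up, veering
    if bumped_left:
        return (1.0, 0.2)                    # obstacle on the left: turn right
    if bumped_right:
        return (0.2, 1.0)                    # obstacle on the right: turn left
    jitter = random.uniform(-0.2, 0.2)
    return (1.0 + jitter, 1.0 - jitter)      # open space: wander roughly forward
```

Nothing in the code represents where the robot has been or what its surroundings look like; the exploratory behavior emerges from the rules meeting the world.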

Now machine learning is truly coming into its own. Show a machine learning system thousands of examples of handwriting and it will figure out how to recognize letters and numbers, and it will do so without what we would consider an internal picture of what makes a letter an A instead of a B. Instead, it will develop a probabilistic matrix expressing the distribution of grid squares in various shades of gray. The variety of machine learning called “deep learning” can do the same with far more complex problems, in ways that literally surpass human understanding. As we come to rely on real-time machine learning systems (autonomous car navigation and routing, for example), they will feel less like machines with inner representations than like responsive tools.
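
As a rough illustration (a minimal sketch assuming scikit-learn and its bundled 8x8 digits dataset, not any particular production system), a model can learn to recognize handwritten digits from pixel intensities alone. Its “knowledge” ends up as a matrix of weights over gray-level grid squares rather than a rule a person could read.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 8x8 grayscale images of handwritten digits, flattened to 64 pixel values
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# What the system "knows" is one weight per (digit, pixel) cell: a matrix
# over grid squares, not an inspectable definition of what makes a 3 a 3.
print("accuracy:", round(model.score(X_test, y_test), 2))
print("learned weights:", model.coef_.shape)   # (10 digits, 64 pixels)
```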

Exciting future

This gives us a better model for generating and managing knowledge. It frees us to think of knowledge as a way of responding to a world that is never the same twice. Static knowledge, of course, has its place, but the most exciting future of knowledge lies in building systems that respond, learn, and respond better the next time.
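
For instance (a minimal sketch assuming scikit-learn 1.1 or later, with made-up stand-in data), an online learner updates its model with every new example rather than freezing its knowledge, so it responds a little better the next time.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")

# Seed the model with one example so later updates can be incremental.
x0 = rng.normal(size=(1, 3))
model.partial_fit(x0, np.array([0]), classes=np.array([0, 1]))

# A stand-in for an ongoing stream of real-world feedback: each interaction
# nudges the model's weights, so it never stops learning.
for _ in range(1000):
    x = rng.normal(size=(1, 3))
    y = np.array([int(x[0, 0] + x[0, 1] > 0)])
    model.partial_fit(x, y)

print(model.predict(rng.normal(size=(5, 3))))
```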

As a result, perhaps we will no longer think we experience the world as a picture in our head. Good. It’s been lonely being locked in our brain’s bony cell.

 
