Model knowledge
By David Weinberger
We understand things by seeing how the new thing is like an old thing. That means that we can't ever be radically surprised by anything, but it also means that we're capable of learning, assuming that learning has something to do with fitting things into a context. Sounds like a good trade-off to me.
One type of familiar problem arises when the match isn't as good as we think it is: Someone might think that because Hinduism is a religion, it must be like other religions when it comes to praying to a deity. Nope. Or, a high-tech business might think that its customers are like its engineers and thus would rather have minute control than ease of use ... not that such a mistake has ever actually been made.
But there's another, and odder, way of understanding via similarities: models. Models have a one-to-one relationship with some aspects of what they model, but not with all of them: If your model plane is the same size as, as detailed as, and made of the same materials as the original plane, you haven't built a model, you've built a plane. It's what a model doesn't share with what it models that makes it useful.
Software engineers do data modeling frequently. For example, if you're starting a business, you'll have to think about the categories of information you need to capture during a transaction and how those categories are related. Your data model begins its life as labeled boxes on a white board because of what white boards have that databases don't: the freedom to draw wherever you want just by dragging a marker across the surface. The white board becomes a picture (model) of the structure of the data you'll be collecting. Very handy, as long as you don't confuse the data model with your customers themselves ... but we're all past that particular way of thinking, aren't we?
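To make the move from white board to database concrete, here's a minimal sketch of how those labeled boxes and arrows might harden into typed records. All the names are hypothetical, invented for illustration; the point is only that every box becomes a structure and every arrow becomes a reference:

```python
from dataclasses import dataclass

# Each "labeled box" from the white board becomes a typed record,
# and each arrow between boxes becomes a reference. All names here
# are hypothetical.

@dataclass
class Customer:
    customer_id: int
    name: str
    phone: str  # should the area code be its own field? the white board doesn't care yet

@dataclass
class Transaction:
    transaction_id: int
    customer: Customer  # the arrow from the Transaction box to the Customer box
    amount_cents: int
    product_sku: str
```

Notice what gets lost in translation: the white board lets you scribble a question mark next to the phone box; the code makes you commit.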
Models of knowledge are more problematic. Coming up with a taxonomy of terms that captures the corporate knowledge is a major undertaking, as anyone who's engaged in it knows. It's one thing to argue over whether it's important to capture area codes as a separate field during a customer transaction and another to try to decide how many meaningful types of faults can show up in the product quality assurance process. Is "cracking" different from "splitting"? And what about "splintering"? Are those three separate categories? Or is splintering a type of splitting? Thus do committees descend into hell.
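One way to see what the committee is fighting over: a taxonomy is just a tree of terms, and every argument is about where a node hangs. A toy sketch, using the fault terms from the example above (the structures are invented, not drawn from any real QA system):

```python
# Two rival taxonomies for the same three fault terms, as nested dicts.
# Nothing in the data can tell you which tree is "right" -- only which
# one is more useful for some particular purpose.

faults_as_siblings = {
    "fault": {"cracking": {}, "splitting": {}, "splintering": {}},
}

faults_nested = {
    "fault": {"cracking": {}, "splitting": {"splintering": {}}},
}

def terms(tree):
    """Flatten a taxonomy into the set of terms it recognizes."""
    found = set()
    for term, children in tree.items():
        found.add(term)
        found |= terms(children)
    return found

# Both trees recognize exactly the same vocabulary...
assert terms(faults_as_siblings) == terms(faults_nested)
# ...so the committee's fight is purely about structure.
```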
The problem gets worse as the area of life being modeled gets wider. By the time we're up to models of consciousness, the task is, in my opinion, hopeless, but for an illustrative reason: Knowledge taxonomies model something that is non-taxonomic. Human thought doesn't operate within a "concept tree". Our categories of thought are squishy. That's why we have to spend so much time arguing over splitting vs. splintering. In building a taxonomy, we're not unearthing one that's already there, buried in our subconscious; we're trying to create one that will work for some particular purpose.
We no more organize thoughts in our brains taxonomically than we mentally file ideas in alphabetic order. And like alphabetic order, KM taxonomies are all about making information findable and visible in useful ways, not about capturing something real about how we think.
Now, of course, to make a taxonomy useful, you need to observe how your organization uses terms: If all the engineers talk about the "tearing" problem with a certain product, there's probably no reason for the taxonomy to call it "rending." And if there's no difference in how a product is treated whether it's split or splintered, then your taxonomy will probably settle on just one of those terms. But the taxonomy you end up with, no matter how useful, doesn't actually model anything. And that's OK: Encyclopedias don't model anything and they're still darn useful.
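In practice, that settling often amounts to nothing fancier than a lookup table mapping the words people actually use onto the terms the taxonomy chose to keep. A hypothetical sketch:

```python
# A hypothetical normalization table: the taxonomy keeps one canonical
# term, and the words observed on the floor map onto it.

CANONICAL = {
    "tearing": "tearing",        # the engineers' word wins
    "rending": "tearing",        # no reason to fight them over it
    "splitting": "splitting",
    "splintering": "splitting",  # treated the same, so filed the same
}

def classify(observed_term: str) -> str:
    """Map a term as people use it to the term the taxonomy settled on."""
    return CANONICAL.get(observed_term.lower(), "unclassified")

print(classify("Rending"))      # -> tearing
print(classify("splintering"))  # -> splitting
```

Useful for finding things; silent, as the taxonomy itself is, about how anyone actually thinks.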
David Weinberger edits "The Journal of the Hyperlinked Organization" (hyperorg.com); e-mail: self@evident.com.