Richard Wrangham described the behavior of a young chimp, Kakama, who played with a small log as if it were a child. Kakama made a small nest for it, held it, and seemingly loved it. What is the best scientific explanation for this behavior? Do you risk anthropomorphizing the behavior by suggesting it is like what a child does? Or do you risk anthropodenial by assuming the chimp is just an automaton, more like a watch than a human, just a watch that happens to like logs?
The usual way to go about answering this is to appeal to parsimony.
Parsimony is the principle of preferring the simplest scientific explanation for a phenomenon over all the others.
In other words, everything should be explained as simply as possible. The simpler the better.
Making things simple is what you get when you use the mental tool of Occam’s razor. William of Occam’s basic idea was to select the explanation that requires the fewest moving parts but still gets the job done. This idea has a long philosophical history. Characters like Aristotle, Newton, and Thomas Aquinas have all said at one time or another something as seemingly intuitive as “keep it simple, stupid.” Except they usually said it in Latin, which Google Translate tells me is “custodiunt illud simplex, stultus.”
The take home message is don’t be a stultus.
A good example among evolutionary biologists is the use of parsimony when building evolutionary trees. Parsimony is well defined here: the best tree is the one that requires the fewest mutations. It is generally considered better to hypothesize one mutation that gave rise to a lineage sharing a common trait than to assume multiple mutations that gave rise to many lineages independently. For example, we generally assume that all vertebrates shared one common ancestor, not that vertebrates arose multiple times in different places.
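Counting mutations on a tree can be made concrete. Below is a minimal sketch of the standard small-parsimony calculation (Fitch’s algorithm) on made-up trees; the trait labels and tree shapes are invented for illustration, not real vertebrate data.

```python
# Minimal sketch of parsimony scoring on a tree (Fitch's algorithm).
# Trees and trait values below are hypothetical illustrations.

def fitch(tree):
    """Return (possible ancestral states, minimum mutation count).

    A tree is either a leaf state (a string) or a (left, right)
    tuple of subtrees.
    """
    if not isinstance(tree, tuple):           # leaf: one observed state
        return {tree}, 0
    left_set, left_cost = fitch(tree[0])
    right_set, right_cost = fitch(tree[1])
    common = left_set & right_set
    if common:                                # subtrees agree: no new mutation
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

# Trait: vertebrate backbone present ("V") or absent ("x").
# A single origin of the backbone explains the data with one mutation...
one_origin = ((("V", "V"), ("V", "V")), "x")
# ...while a tree that scatters vertebrates among non-vertebrates
# forces independent origins of the same trait.
scattered = ((("V", "x"), ("V", "x")), "V")

print(fitch(one_origin)[1])  # -> 1
print(fitch(scattered)[1])   # -> 2
```

Under parsimony, the first tree wins: it explains the same observations with fewer evolutionary events.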
Appeals to parsimony are often played like ‘exploding kitten’ cards that nerds pull out to end arguments in their favor.
The problem is that what is simple is not always easy to identify.
What is the parsimonious explanation in this case of Kakama the chimp? I can imagine that Kakama is pretending that the log is a small child in the same way that a human child might play with a toy doll. But maybe it is simpler to imagine that Kakama is playing with the log because playing with logs this way simply makes it feel good. The chimp does not need to pretend, but can instead be like a microwave that, when it receives the appropriate inputs, rotates the object in its chamber until it gets warm.
Which is simpler? Kakama the pretender? Or Kakama the fuzzy microwave?
It is not uncommon for animal biologists to claim that the fuzzy microwave is the more parsimonious explanation. But this is far from universally accepted, even among primate biologists. This is because parsimony is at best a rule of thumb and in many cases simply means different things to different people.
Bertrand Russell, in Logical Constructions, explained Occam’s razor in a way that I think helps answer the question: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.”
I am familiar with the simplicity with which a child can play with a toy, and imagine that it is something that it is not. I am not familiar with the kind of complexity it would take for a child to do this without an internal representation of the thing the toy is supposed to be. This second case requires an explanation that seems entirely unfamiliar. It would be like acting out Shakespeare without understanding the plot or even understanding the words of the English language coming out of one’s mouth.
Another way to put it might be, in the terms of Theodore Woodward, “When you hear hoofbeats, think of horses not zebras.” Horses are more common than zebras. And young animals (like human children) that imagine their toys are substitutes for other objects seem more common than automatons that behave as if they have representations but don’t actually have them.
Believing that some agents don’t think certain kinds of interesting thoughts is a special case of the ‘zombie’ explanations one often reads about in philosophy. I don’t have to be a real person with feelings and memories; I could just be an unconscious zombie that behaves as if I have feelings and memories. There’s no way for you to know.
Of course believing that everyone is a zombie is an odd thing to do. That’s because it is the less parsimonious explanation given what we know about ourselves. In the future, we might build sophisticated computers that do a good job of imitating humans, and then how we define zombie may change. But for now, we accept it as given that other humans are not zombies. We do not need to propose that animals are simpler machines without representational or symbolic awareness but with some unfamiliar machinery that allows them to behave as if they were so complex. We can simply invoke Bertrand Russell’s parsimony rule and propose the known explanation, which is they probably think a lot like we do.
There is another problem with parsimony though that escapes Russell’s familiarity clause. It is the argument from convergence. Suppose I do ten studies on chimpanzees and each one could be explained by a possibly complex but familiar theory, the theory of mind. If chimpanzees have theory of mind then they have beliefs and wants and can simulate the intentional states of others. A chimp with a theory of mind knows that if another chimp can’t see something, then it might not know about it.
But suppose also that each of these studies could be explained by a simple rule specific to each study, so that we would need ten different simple rules to explain the ten studies. Which is simpler, the ten simple rules or the single theory of mind?
Premack and Woodruff (1978), who were arguing exactly about chimpanzee mental states, argued that it was more parsimonious to accept theory of mind as the better explanation. The alternative, they argue, would be something akin to chimpanzees internalizing the Journal of Experimental Psychology, treating it as a kind of complex rulebook for how to behave around experimental scientists.
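The “ten rules vs. one theory” trade-off can be sketched with a crude description-length count: an explanation’s total cost is how many components it has times how complex each component is. The numbers below are invented purely for illustration; nothing here measures real chimpanzee cognition.

```python
# Toy, hedged illustration of the ten-rules-vs-one-theory trade-off.
# All numbers are invented for illustration only.

def description_length(n_components, bits_per_component):
    """Total 'cost' of an explanation: components times their complexity."""
    return n_components * bits_per_component

# Ten independent study-specific rules, each individually simple...
ten_rules = description_length(n_components=10, bits_per_component=20)

# ...versus one theory of mind: more complex as a single component,
# but reused across all ten studies.
one_theory = description_length(n_components=1, bits_per_component=80)

print(ten_rules, one_theory)  # -> 200 80
```

On this accounting the single theory is the simpler explanation overall, even though each individual rule is simpler than the theory. That is exactly the ambiguity in “simple.”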
Heyes (1998) spent a number of years arguing for simpler associationist accounts of primate behavior. Her mantra seems to be that chimps don’t have minds like ours. Think fuzzy microwave.
In response to arguments that theory of mind was a simpler explanation if you take into account all the experiments, Heyes conjured Hume to sort them out.
Hume is an exploding kitten of an entirely different sort. If someone invokes Hume, you lose. But in reality, everyone loses.
Hume is like the evil Zen ant-genius. His claim, which we now call the problem of induction, is that we can’t prove anything. He put it this way: “there is nothing in any object, consider’d in itself, which can afford us a reason for drawing a conclusion beyond it.” In other words, you can’t make inferences beyond specific cases because there is no natural law that says generalization is correct. Generalization is often wrong. To claim that all swans are white because you’ve only seen white swans is to overstate your case. You simply cannot prove that the sun will come up tomorrow no matter how many rigorous scientific studies you do. Even with our fancy Bayesian statistics, we prove nothing.
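The Bayesian version of Hume’s point can be made concrete with Laplace’s rule of succession: with a uniform prior, after seeing n white swans (and nothing else), the probability that the next swan is white is (n + 1) / (n + 2). It creeps toward certainty but never arrives.

```python
# Sketch of Hume's point in Bayesian terms, via Laplace's rule of
# succession: under a uniform Beta(1, 1) prior, after n white swans
# the posterior predictive probability of another white swan is
# (n + 1) / (n + 2) -- approaching 1 but never reaching it.

from fractions import Fraction

def prob_next_white(n_white_seen):
    """Posterior predictive probability under a uniform prior."""
    return Fraction(n_white_seen + 1, n_white_seen + 2)

for n in [10, 1_000, 1_000_000]:
    # No finite number of observations proves "all swans are white."
    print(n, float(prob_next_white(n)))
```

Even a million confirming observations leave the generalization short of proof, which is Hume’s problem of induction in arithmetic form.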
Hume’s argument is the sharpest of Occam’s razors. It cuts reality at its joints. It’s kind of a “we don’t know crap” argument.
But we do know crap, and we don’t really need parsimony to sort it out. Parsimony should not add weight to any argument until we understand exactly what’s being claimed and what’s being ignored. If someone has to invoke parsimony, then one of two things is true. Either their argument isn’t compelling, or the people they are arguing with aren’t paying attention.