The story of the blind men and the elephant is as useful as it is widespread. It features in Jain, Buddhist, Sufi and Hindu lore and has also been applied in modern physics and biology. In case you haven’t heard it before, the essence of the story is that a few blind men each touch a different part of an elephant (like the tail or ear or leg or tusk) and each conclude that the elephant is like the part that they see (brush, fan, pot, or spear). Depending on the version of the story, they either cooperate to understand the elephant as a whole or get into a fight because each clings to his own perception to the exclusion of all others.
Thus the story of the blind men and the elephant has a happy resolution, or at least it can have a happy resolution, because the different perspectives can all be reconciled. The function of the blindness in the story (sometimes the men aren’t blind but are in a dark room) is to temporarily hobble their perspective. Because of their inability to see, they have to get very close to the elephant, and this proximity robs them of the chance to step back and adopt a wider perspective. Since what we see depends on our perspective, they are ultimately limited not by lack of sight, but by lack of perspective.
The perspective needed to see the whole truth of the elephant, however, is a perspective that’s readily available. Rumi, a Persian poet and teacher of Sufism from the 13th century, ends his version with the observation: “If each had a candle and they went in together the differences would disappear.” In that case, bringing a light literally creates the ability to step back and look at the elephant as a whole. But even with blind men, the possibility is there to aggregate the individual observations and metaphorically position themselves at the correct vantage point from which the entire truth comes into frame.
But what if some vantage points are not so readily accessible? I think that a lot of life’s most important truths fit into this category. Sometimes the truths we seek are not merely about collecting enough facts. Sometimes it’s a question of knowing which facts are more important or how the available facts relate to each other. Sometimes the crucial facts are outside our ability to comprehend. (Consider another famous example: how does a fish know what “wet” means?)
One of the reasons I studied economics in graduate school was because I wanted to know, definitively, answers to some of the contentious political issues of our day. And it turns out that on some issues, knowing economics really does provide a fairly unambiguous (albeit politically intractable) solution. But on many, many other issues there is no consensus. The arguments just get increasingly complex, nuanced, and specialized until you reach a point (or at least, I know that I did) where you realize that there will always be people with more expertise than you who disagree with each other.
What then? What happens when you’re one of the five blind guys touching the elephant, and just stepping back (literally or metaphorically) isn’t an option?
My hope, at least, is that there’s a sense in which a community can be bigger than the sum of its parts. This isn’t about the wisdom of the crowd. (That’s a scenario where aggregation causes unbiased error to cancel out.) It’s the idea that a community can behave like a complex system and manifest emergent behavior. It’s the idea that maybe if I’m true to the world as I see it and my friend is true to the world as she sees it and both of us are contributing to the same community, there’s a sense in which the community will be able to integrate the perspectives even if we cannot do so individually.
I’m not certain that this does happen, but I like the idea for two reasons.
- It’s comforting.
- It’s beneficial.
It’s beneficial, in a sense, because it’s comforting. If my friend and I both hold this belief, it creates a framework for genuine tolerance and plurality. Rather than an either/or contest for victory, it becomes possible to perceive divergent views as mutually cooperative elements within a greater process. Of course, in practice, formal institutions need to enact one policy or another, which creates real winners and losers, at least in the short run. But over the longer run we don’t merely choose from the set of available policies; we envision new solutions. And that, I think, is where disagreement can be fruitful and a creative dialectic is possible.
But is it true? The question matters, because nothing in this post so far has had a whiff of simplistic subjectivity. I’m not asking if we’re all right “for ourselves.” I’m presuming that objective reality does, in fact, exist and asking whether we’re right with respect to it. So: does society really work like this, or is it just a beneficial lie or a happy fairy tale?
I suspect it is a self-fulfilling prophecy. It belongs to that class of beliefs whereby reality is determined directly by faith. I think that a society of people who believed in this process would be a society where this process really existed. It is the potential for unity without unanimity, and as such I believe it is one of the stepping stones towards Zion.
Awesome images, Nathaniel!
I would like to think that what you describe is possible. I think it does happen with some things, at least. If I think that what the world needs is a new style of barbecue sauce, and Jane thinks that what the world needs is a new way to sell tickets online, we may both be right, and we don’t have to agree with each other for the community of which we are part to address both needs. The unity here, of course, is a bit attenuated, since neither of us has to endorse, or even be aware of, the other’s project. Perhaps what is needed is that we accept that there are lots of different economic ventures out there and agree not to interfere with one another’s ventures, so long as they stay within certain bounds of fair play.
Similarly, if I see that Sister Smith needs her leaves raked, and you see that Brother Jones needs a friend, the church (through our work) can take care of both of them without anyone in the church being aware of both.
I’m not sure either of these examples really addresses your question, though, because in neither case are we really all that tempted to make exclusive, conflicting claims, like in the case of the blind men and the elephant. I’d be interested to see you work through another scenario.
Thanks, Ben. Wikipedia is a great resource! :-)
What I had in mind is primarily politics. There’s a little saying I heard when I was a kid about how the United States is like an eagle: it needs both a left wing and a right wing to fly. I thought that was profound when I was 8, stupid when I was 16, and now I’m coming around to thinking that maybe there’s actually something to it after all.
One possibility is that the philosophical conflict between left and right (just to use an example) can, when the conflict is kept within certain bounds and when carried out in good faith, lead to the creation of new policy ideas that wouldn’t be considered if it weren’t for the debate taking place.
Another possibility is that the victories of left and right over individual policy battles may create a mixture, across time and across issues, that is better than the sum of the wholes, and perhaps better than if either ideology actually had free rein. I tend to think that in a complex system like a human society, no one person, ideology, or policy has the right answer, so this kind of process of picking the right mixture of policies seems plausible, at least.
“Maybe We’re All Right,” or maybe we are NOT all right? What do the scriptures say?
This complements David Tayman’s post over at Worlds Without End: http://www.withoutend.org/prophets-elephants-truth-charity/
I like the “emergent behavior” model. It reminds me of times when I’ve tried really hard to do my best work, and then shown it to an editor or a co-worker whose comments made it better.
I ask myself the converse, one version of which is “there exists a human mind that can ‘fully comprehend’ or ‘contain’ God,” and quickly determine that I don’t believe it; such a statement (it seems to me) misrepresents the nature of, and relationship between, mortal man and God. If no one human mind can fully comprehend God, then emergent behavior becomes a strong candidate.
Note that “fully comprehend” is far more demanding than having “a correct idea of God’s character, perfections, and attributes” (Lectures on Faith). Also, note that even assuming emergent behavior is right, that does not mean and does not require that we are ALL right. There’s plenty of room in that model for some of us to be completely wrong.
NG: It’s the idea that maybe if I’m true to the world as I see it and my friend is true to the world as she sees it and both of us are contributing to the same community, that there’s a sense in which the community will be able to integrate the perspectives even if we cannot do so individually.
So, Nathaniel, if my idea is that one should not be true to one’s beliefs, then between the two of us, we’ve got it covered.
This seems like a way for you to keep your favorite moral idea, effort, but graft it on to a diversity of beliefs and call it good.
You keep your idea (effort), you add everyone else’s ideas PLUS effort (your idea), and you hope for progress. Nice try, but it still doesn’t tell us why good faith in the service of a bad idea is better than no faith in a false idea.
If we are fallen creatures (and by golly there are certainly some seriously fallen people among us, and at least some of these are both charismatic and influential), then it seems like we should be putting a lot less faith in our own ideas.
If one is aiming at comfort and benefits, how does this beat just assuming that one is always right and God will let everyone else know in the afterlife?
Nathaniel,
Let’s take the case of a loved one who joins a death cult, a group that is convinced suicide is what they need to do for salvation.
There are several possible sets of beliefs to respond to this situation.
1. You disagree with them and appeal to your belief system as evidence (God doesn’t condone suicide, suicide is immoral because if we made it a global principle the species would end and that’s bad, you’re just depressed or being lied to, whatever.)
These would all seem to violate your theme here (at least to the extent they attack the faith of the cult member without convincing them they are wrong).
On the other hand, you could believe that conviction is important and say, “Best of luck with your Pascal’s wager; see you in heaven, I’ll get you a nice headstone.”
Or you could say, “You know, you might be right, but it’s hard to say for sure. Why don’t you just flagellate yourself or something and see if that will work? After all, nobody’s perfect.”
It seems to me that one must choose on a pragmatic basis between these three; call them:
1. I know what’s right and the others need to be convinced,
2. We all have our own beliefs, but if we work hard we’ll all be better off and
3. People believe all kinds of useless stuff so we’re better off not trying too hard or taking anything too seriously lest someone get hurt.
Which of these one believes seems largely a matter of faith in both God and people, and also seems to depend on what type of errors one most wishes to avoid.
This is a long way of agreeing with your principle but suggesting that your idea may be more extensive than you think.
Old man says: “Maybe We’re All Right,” or maybe we are NOT all right? What do the scriptures say?
He may be right but I can’t tell what they say about motes because of this giant, frickin’ beam in my eye. :)
Okay, I think that does happen, Nathaniel, and I think it was actually one of the key ideas in the early American conception of federalism. Have you read J.S. Mill, “On Liberty”? Chapter 2 is particularly relevant.
Here is one of the more memorable passages:
“In politics, again, it is almost a commonplace, that a party of order or stability, and a party of progress or reform, are both necessary elements of a healthy state of political life; until the one or the other shall have so enlarged its mental grasp as to be a party equally of order and of progress, knowing and distinguishing what is fit to be preserved from what ought to be swept away. Each of these modes of thinking derives its utility from the deficiencies of the other; but it is in a great measure the opposition of the other that keeps each within the limits of reason and sanity.”
The hitch is that Mill’s conclusion isn’t exactly that we are all right; rather, we are all partly right and partly wrong. Indeed, maybe we are all mostly wrong, but if so, then when we disagree with each other we are likely to have more of the truth between us than we would if we agreed.
Of course, in any given case, it is anyone’s guess whether what emerges from a group of varying opinions is smarter than the individuals would come up with, or even dumber and more unhinged. I think in practice we see a lot of both.
Your vision of Zion contradicts one of its own basic tenets, in that it requires everyone to believe the same thing (in the emergent process you describe). From a game-theoretic point of view, you seem to be arguing for some sort of large-scale prisoner’s dilemma, but the optimal solution there requires a few sophisticated ideas to understand, and not everyone does understand them (or perhaps even can, considering the initial conditions they started from).
Just as a general reply, since this has come up in multiple comments:
If you ask 1,000 people to guess how many marbles are in a jar and then take the average of those guesses, you will probably get a very, very good estimate. In that sense, they are all right (together), even though (individually) their guesses are probably almost all wrong.
That is the sense in which my title (“Maybe We’re All Right”) should be taken.
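To make that averaging concrete, here’s a minimal sketch in Python. The specific numbers (a jar of 1,000 marbles, 1,000 guessers, errors of up to 50% in either direction) are purely illustrative assumptions on my part, not a claim about how real crowds guess.

```python
import random

# Illustrative assumptions (not from the post): the jar holds 1,000 marbles
# and each guess is unbiased but noisy, off by up to 50% in either direction.
TRUE_COUNT = 1000
NUM_GUESSERS = 1000

random.seed(42)
guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(NUM_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(f"Average individual error: about {avg_individual_error:.0f} marbles")
print(f"Crowd estimate: {crowd_estimate:.0f} marbles (true count: {TRUE_COUNT})")
# Typically the crowd's average lands within about ten marbles of the truth,
# even though the typical individual guess is off by roughly 250 marbles.
```

The individual guesses are almost all wrong, but because the errors are unbiased they cancel in the average. That narrow sense of “all right together” is all the title is meant to carry.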
Brian-
I understand where you’re coming from, but I believe your analysis is flawed. Of course Zion requires unity; that is definitional. The question is: what kind of unity?
My suggestion is that we move from a superficial notion (e.g. everyone has the same conclusions) to a deeper, more robust notion (e.g. everyone is engaged in the same process of coming to conclusions) that can tolerate superficial disagreement.
I think the game-theoretic point of view is critical, and you’re right that the Prisoner’s Dilemma is the key game, but I think you’re missing two vitally important facts. (Maybe just one.) First (perhaps you know this already), the Prisoner’s Dilemma can be “solved” if it is an iterated PD with an unknown number of iterations, which is a fairly good model for human interaction in mortality. Second (and this is the vital point), players do not need any sophisticated ideas to solve the game.
You can use Nash equilibrium concepts to describe all kinds of real-world phenomena when the people playing the “game” (in the wild, as it were) have never heard of Nash equilibrium and have no conception that they are engaged in activity that could be described by game theory.
What is required, in practice, is something like the ability to learn (which we have) and some concept of the right strategy. One could look at the central teachings of Christianity (and the Atonement itself) as a giant, flashing neon sign that says (in game theory terminology) simply: “cooperate”.
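As a rough sketch of that last point (my own toy illustration, nothing more), here is a tiny iterated Prisoner’s Dilemma in Python. The payoff numbers and the two strategies, “always defect” and the very simple “tit-for-tat” rule, are standard textbook choices I’m assuming for the example; the point is only that a thoroughly unsophisticated rule can sustain cooperation over repeated play.

```python
# A minimal iterated Prisoner's Dilemma sketch. The payoffs below are the
# standard textbook values (temptation 5, reward 3, punishment 1, sucker 0);
# they are assumptions for illustration, not numbers from the discussion above.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then simply copy the other player's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Play repeated rounds and return the two players' total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players settle into steady cooperation (3 points per round),
# while a defector playing against tit-for-tat wins once and then gets stuck
# at mutual defection (1 point per round).
print(play(tit_for_tat, tit_for_tat))    # (600, 600)
print(play(always_defect, tit_for_tat))  # (204, 199)
```

Tit-for-tat knows nothing about Nash equilibria; it only remembers the other player’s last move. That is roughly all that “the ability to learn plus some concept of the right strategy” amounts to.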
Nate, your response in 11 is contradicted by your text: “This isn’t about the wisdom of the crowd. (That’s a scenario where aggregation causes unbiased error to cancel out.) It’s the idea that a community can behave like a complex system and manifest emergent behavior.” Complex system/emergent behavior works pretty well. Wisdom of the crowd not so much.
Perhaps you are just explaining the title, which seems to me overstated (as titles tend to be), problematic, and not what you really discuss. In particular, the “All” lends itself to criticism and logical conundrums that are completely unnecessary.
NG: If you ask 1,000 people to guess how many marbles are in a jar and then take the average of those guesses, you will probably get a very, very good estimate. In that sense, they are all right (together), even though (individually) their guesses are probably almost all wrong.
This averaging works where the average person is fairly well-informed about the question.
If we asked for the mass of the sun or of a neutron, I don’t think the averaging works nearly as well, if at all.
This is why the method depends on how good you think people’s intuitive judgment of right and wrong is. Some think we’re mostly not psychopaths, so it’s pretty good.
Others look at history, see slavery, genocide, war, and torture (and note that no leader ever threw a torture party and had nobody come), and think we’re pretty bad at it, so the averaging won’t work.
Ben H. thinks it’s a mixed bag. I agree, but with the caveat that we never really know when it’s working and when it’s not.
I would say that something like what you have in mind is actually taught in the Book of Mormon, Nathaniel, in 2 Nephi 29:8-14. God’s words are never finished, and he speaks to all nations, and eventually all of those words will be gathered together. We could read it as just a matter of confirming the same message through multiple sources, but the fact that God rebukes those who think they have all they need already suggests that the messages say more together than they do individually.
Chris (and Mtnmarty)-
You’re right on both points:
1. The wisdom of crowds example is solely to explain the title.
2. The title is overstated.
I’d just like to say, however, that the title isn’t overstated merely in an effort to be provocative. There’s a sense of real relief and hope in the idea that we can actually come to a position of saying “Maybe our differences are OK”, of seeing the potential for unity before unanimity. There is for me, anyway.
Thanks, Ben. I hadn’t made that connection before!
Nathaniel, thanks for the reply. What I hear you saying is that the process can handle only superficial disagreement, but that’s all it needs to. I disagree. I like to think that deep down we’d all come to the same conclusions if we just had the same information, but I’m regularly disabused of that notion. I’m compelled by experience to reject any view that claims differences between us are just based on misunderstandings, different experiences, genetic variation, etc. Sincere, fundamental differences exist between earnest people and if a system can’t handle that, then it’s not Zion.
Secondly, it appears to me as if you’re implicitly stating some moral version of the central limit theorem (your example of the marbles in a jar brought this to mind). I’m not convinced that a moral version of that theorem exists, at least not in the axiomatic system I operate in.
I’m compelled by experience to reject any view that claims differences between us are just based on misunderstandings, different experiences, genetic variation, etc.
Brian, it’s a big jump from saying people don’t agree with the same information to saying they wouldn’t agree even with the same experiences and genetics added in. I think experiences and chemistry (not just genetics but the entire makeup of our bodies) explain why we disagree.
The central limit theorem would seem to apply if one were interested in the average moral opinion. But this is where I don’t understand the idea of universal, objective morality. It is never explained how the human brain interacts with and understands this universal morality in a way that produces the distribution of moral beliefs that we see.
On the other hand, there is more and more evidence that differences in how people experience anxiety, risk and disgust correlate with the moral positions they hold. Equating the average fear response with “optimal fear response on which to infer universal morality” seems like a bit of a stretch.
Mtnmarty, would you mind explaining why you make the claim in your second paragraph of 19? Why wouldn’t we disagree even with the same chemistry and experiences? Perhaps not all problems have an optimal solution.
Ahh, you haven’t seen my essay, have you, on how there is no elephant.
Each of us is constrained by our context, but that doesn’t mean that there is a superior viewpoint, just a different one.
Almost as if there were macro quantum states that applied to God.
Brian,
I guess I believe that our chemistry and our experiences are all we’ve got. If those are the same then I think the people would be the same. Unlike Nathaniel I don’t think there is anything else to make someone have unique choices.
I think to have a moral belief is for a brain to have a particular type of experience.
Mtnmarty, that is where we differ. And, as it is an axiomatic difference, I don’t see that any resolution is possible. Perhaps one day we’ll know more about what we *are* and we can return to this conversation with more ground on which to make conjecture.
Brian,
I’ll look forward to that. I agree that it’s an axiomatic difference if that’s all we are, but it’s an empirical question what moral states we can produce chemically and genetically.
Our current morals and lifespan limit our experiments in this area, but what fun it would be if the most empathetic of us bred for 10 or 20 generations, and the same with the most aggressive, and then we’d know better what genetics could do.
So often political arguments are on a slightly different paradigm: each side is _partly_ correct, and thinks the other side is completely wrong. The blind men look at different parts of the elephant and their perspectives don’t overlap at all. In real life, things can be a bit more nuanced (and frustrating) as perspectives do overlap.