Over at Andy's, we've been discussing the dilemma of the "consequentialist" who thinks her moral views, while true, would have bad consequences if widely believed.
At the risk of sounding didactic, I'll back up and try to explain the jargon (especially since I might be misusing it). A consequentialist is someone who thinks that the only things that matter morally are the consequences of actions. The most familiar kind of consequentialism is utilitarianism, which evaluates those consequences by how well they promote average or total happiness or preference satisfaction.
It is fairly well-established that humans do not, in fact, think about moral issues like consistent consequentialists. It matters to us whether harm was intentional, and whether it was the result of an act or omission, and whether the person who suffered it was the kind of person the actor owed loyalty to. Lies that cause no harm are still considered at least presumptively wrong. Most people think at least some kinds of consensual sex are morally problematic. Etc.
It isn't immediately obvious that this fact about our moral psychology is a problem for consequentialism. After all, our intuitive sense of astronomy conflicts with Copernicus and our intuitive sense of physics with Einstein. So it could just be that our "natural" assumptions about morality are wrong.
But what if this fact about our moral psychology means that we will actually do more harm if we try to reason as consequentialists than we would if we used our common sense? It is equally well established that our moral intuitions are easily biased when our own interests are involved, and a non-intuitive consequentialist morality might be even more easily biased. Rationalizing one's way around "thou shalt not steal" may be harder than around "thou shalt steal iff the consequences of stealing for aggregate/average well-being are better than the consequences of not stealing."
One lesson from Hayek is that under radical uncertainty about other people's knowledge, rules of thumb may work better than a rationalism that implicitly assumes omniscience. Even some consequentialists seem to have thought that humans make bad consequentialist moral reasoners -- IIRC, both Stephens and Mill thought this. Robert Wright, in The Moral Animal, seems to conclude both that utilitarianism is right about the moral facts and that natural selection has equipped us with a non-utilitarian moral sense and a strong capacity for rationalization. He doesn't quite say that utilitarianism is bad for us, but I think it is implicit in his account that it very well may be.
Personally, I am inclined to think that moral truth just is whatever normal humans would tend to converge on if they underwent both Rawlsian reflective equilibrium and a Habermasian ideal speech situation. If so, then I think thoroughgoing consequentialism is false. But I can respect the dilemma of a consequentialist who has thought through the implications of our moral psychology: she'd have to conclude that we just can't handle the truth.