All this collective action problem debate is delightful. Here are some not-very-structured musings….
The topic of my unwritten dissertation was how solutions to “the contractarian compliance problem” (the fact that an individual can often do better for herself by ignoring moral constraints on self-interest that, if generally heeded, more than compensate for the short-term sacrifices moral constraints require), and the boundary between ideal and non-ideal political theory, turn on assumptions about human motivation that are open to empirical investigation. My position was (and is) that both pure rational choice (as represented by James Buchanan) and modified rational choice (as represented by David Gauthier) are less satisfactory as a matter of empirical psychology than a more deeply moralized conception of motivation (as represented by John Rawls), but that rationalist accounts of the “moral capacity” or “sense of justice,” like Rawls’s, are also inadequate (in part because of the failure of the Universal Grammar analogy and in part because of naivete about the power of the moral sense to regulate self-interest in many contexts, especially politics).
Anyway, the point is that I don’t accept strict rational choice reasoning about collective action problems. Indeed, I think the fact that we do successfully solve so many of them basically refutes strict rational choice assumptions. (Even if coercion needs to come in to solve a coordination problem, you’ve got to ask why the guys with the guns are doing what they are supposed to, and not just using their powers to plunder, etc.) But if we’re talking about whether or not a certain constraint on self-interest ought to be normatively binding, I think you have to ask: Why? Because I’m a soulless, reductive naturalist, I think there’s a good answer to that: because heeding the constraint will tend to make the person who heeds it better off, conditional on others heeding it, too. This is where a lot of people will part ways with me. They feel uncomfortable seating normativity in individual flourishing. However, I find all the relevant alternatives to be basically religious.
I am entertained by the examples at hand — gifts to the U.S. Treasury, meat avoidance, and carbon minimization — largely because I see people fighting over whether or not to try to establish or reinforce a moral norm, and that is really interesting. I found Henry’s rational choice-style answer to the question of gifts to the government amusing, because it suggests that he is not interested in reinforcing a moral norm that would motivate us to give money voluntarily to the Treasury. But if he wants the government to have more money, why not? Perhaps such a norm of voluntary giving might undermine a sense of the necessity and/or moral legitimacy of coercive taxation, which he believes it is important to maintain. Perhaps he thinks that this is an area where we cannot realistically expect the moral sense to sufficiently regulate self-interest, and so appealing to morality to do a job only coercion can do will be self-defeating. A new set of moral norms might crowd out a more effective coercive solution.
Well, I can buy that as a real possibility. But then I become very interested in how to apply this kind of reasoning to other, similar cases. A lot of people seem to want to pursue a joint moral-coercive strategy on carbon emissions. Might that be self-defeating? Or is it supposed that an optimal carbon tax is politically infeasible without some moral ground-softening? Ethical vegetarians can be very evangelical but don’t seem to be very interested in banning or taxing meat at all. Why not? Maybe all these subjects are more dissimilar than I’m assuming. If so, how?
My philosophy leaves me very skeptical that norms about any of these things (much less coercively enforced rules) would have any justified normative force — would be rationally binding. I don’t think higher taxes in the U.S. will leave the average person better off over time, much less the person who pays them. I have no idea how to tote up the net externality of carbon emissions (I don’t even know whether the sign is positive or negative), and neither does anybody else. And since I think morality is for enabling human flourishing, I care about animals only insofar as our attitudes toward them affect patterns of interaction that bear on human well-being.
“Culture wars” are largely ongoing fights about what the governing norms are going to be. Certain kinds of arguments are useful in discouraging people from adopting or internalizing a new norm. I think a lot of rational choice arguments are like that. Because I think a lot of fledgling moral norms are likely to be harmful if they go viral, I like to encourage people to think like an economist, both to help them understand why the norm may not, as a matter of fact, do any good and to promote a generally inhospitable psychological climate for faddish moral memes.
Did you really read this far?