Right now, I’m looking at Richard Layard’s Happiness. He’s an unreconstructed Benthamite, and his view seems to be that evidence on the neurological reward system provides an account of objective utility. And because there’s a neurological correlate to utility, we should think of utilitarianism as the most scientifically respectable of all moral theories, and use it as a guide to social policy, in just the way Bentham intended.
This got me wondering: is the reward system unitary, with a single architecture, or is it implicated in different ways by different cognitive programs or different kinds of decision tasks? (One possibility is that pleasure/benefit is determined by different systems than pain/cost, so it may not be that units of pain and units of pleasure trade off in any simple one-to-one way.)
In this article from Nature Reviews, neuro-ethicist Bill Casebeer argues that a virtue-theoretic approach best captures what’s going on in the brain. Moral judgment and motivation are not in all (most?) cases driven by judgments of utility. For example, “hot” judgments in social contexts that activate theory-of-mind systems probably don’t implicate systems that would calculate either individual or collective expected utility.
This may be important for a number of reasons. The most interesting to me has to do with possible conflicts between social policy that is designed to maximize expected social utility and the affective/motivational systems that actually drive behavior. Rawls’s argument against utilitarianism, in a nutshell, is that it is inconsistent with our “sense of justice”; utilitarian principles will therefore not gain our willing compliance, and so will fail to establish a stable social order. The utilitarian can retort that motivational dispositions are a constraint that utilitarianism must take into account. But then it seems that the principles of utility basically end up mirroring the principles that underlie actual human motivation, which will be doing all the work. At that point it seems otiose to say that what we’re trying to do with policy is maximize happiness, when it would be more accurate to say that we’re trying to come up with principles people take themselves to have a reason to endorse, where those reasons are only sometimes reasons of utility. The fact that the dopaminergic system or whatever lights up whenever we do whatever we do has nothing interesting to do with what we take to be valuable, or with what we should be shooting for socially.
I guess I’m trying to say that nothing about the brain actually helps a utilitarian like Layard justify a Benthamite approach to social policy. The reasons for rejecting utilitarianism were never that we don’t know where utility is in the brain, but that it wreaks havoc with native moral judgment and cuts against the grain of our motivational dispositions. Brain science helps us understand why this is the case. We are natural-born Aristotelians (or maybe Humean sentimentalists), unlikely to be moved by comprehensive schemes of utility maximization. Does anyone who might know whether the evidence supports this argument care to weigh in?