Tyler wants to find a theory that both rationalizes and is consistent with our intuitions. But that is a fool's game. Our intuitions are inconsistent. Our moral intuitions are heuristics produced by blind evolution operating in a world totally different than our own. Why would we expect them to be consistent? Our intuitions provide no more guidance to sound ethics than our tastes provide guidance to sound nutrition. . .
The reason to think deeply about ethical matters is the same reason we should think deeply about nutrition – so that we can overcome our intuitions. Tyler argues that we don't have a good approach to animal welfare only because he is not willing to give up on intuition.
Although Alex is right that our intuitions are not likely to be consistent, there is also no way to completely “overcome” or “give up on” them.
Consider Alex’s story about Temkin, in which Alex and Robin bullheadedly give the utilitarian answer, despite Temkin's attempt to prime an anti-utilitarian judgment. Alex seems proud of himself. Is this a matter of Alex and Robin having “overcome” intuition? Certainly not! How did Alex and Robin arrive at the conclusion that it is even possible to compare value across persons, or that the prospect of “better” lives for more people may trump or override the respect owed to an individual's autonomy and dignity as a separate person? What else but intuition!
As Sidgwick illustrated with such clarity, there is simply no getting to something like utilitarianism without a rationally unsupported intuition about the coherence and primacy of “the point of view of the universe.” Now, it is true that once you embrace THIS intuition, you'll certainly be in a position to contradict the intuitions of folk morality. But contradicting judgments derived from one set of intuitions with judgments from another set of intuitions clearly isn't a matter of “overcoming” intuition. And it surely isn’t the basis for a sense of superiority over other people’s woolly-headedness.
(Aside: And shouldn’t Robin, of all people–champion of Bayesian rationalism–when confronted by the credible data that almost all philosophers disagree with him, realize that the probability that he is wrong is exceedingly high, and so admit that he must be the crazy one? If Robin told a moral philosopher who was arguing in favor of the minimum wage that almost all economists disagree with him, wouldn’t he think it would, in some sense, be rationally mandatory for the philosopher to change his mind right there on the spot? Or at least that ongoing disagreement would be dishonest? If this aside is obscure to you, check out Robin and Tyler’s paper about disagreement.)
Tyler is grappling with what seem to me to be Derek Parfit-like concerns. (Tyler has co-authored papers with Parfit.) If Alex accepts something like utilitarian intuitions, as he seems to in the “sacrifice your child for the others” case, he ought to pay attention. Parfit is the most brilliant utilitarian alive, and his lucid exploration of the logic of the utilitarian intuition leads to a number of paradoxes and “repugnant conclusions” that I believe serve as reductio arguments against utilitarianism. Tyler is at least taking his own intuitive normative commitments seriously and thoughtfully exploring their limits and consequences, rather than proudly pretending that he doesn't have any.
I'd like to say a few other things about the method of reflective equilibrium, the human moral sense, and the (large) role for intuition in normative theories, but that will have to wait for later.