Utilitarians seek "the greatest good for the greatest number" by trying to compute some measure of the goodness of a situation and then maximizing it. But what if some people (or creatures, perhaps hyper-intelligent aliens) exist who have such huge utility functions compared to others that they dominate the total? That is, what if somebody, call her "Lola", were so sensitive ("feels it more") that any course of action which didn't please her caused the net utility to dip? ("What Lola wants, Lola gets!") Maximizing that kind of utility may be fine for Lola, but it's not likely to please the rest of us.
Perhaps we need to get away from the linear or additive utility notion and consider a multiplicative utility --- not the sum, but the product of each person's feelings? Then when any individual's utility is zeroed out, the global product goes to zero. Utility monsters like Lola can't take control any more (short of going to infinity, which spoils the game). Another escape route might be to change the method of combining individual numbers to use the median, the mode, or maybe just the minimum of all values. That would put a premium on risk-avoidance, and make the goal one of maximizing outcomes for the least fortunate. (John Rawls's "A Theory of Justice" takes something like this approach at a societal level.)
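The contrast among these combining rules can be sketched in a few lines of code. The numbers below are purely illustrative (a small crowd plus a Lola-like monster), not a claim about how real utilities would be measured:

```python
import statistics

def aggregate(utilities):
    """Combine one utility number per person under several rival rules."""
    product = 1.0
    for u in utilities:
        product *= u
    return {
        "sum": sum(utilities),            # classic additive utilitarianism
        "product": product,               # multiplicative: any zero kills it
        "median": statistics.median(utilities),
        "min": min(utilities),            # Rawls-flavored maximin
    }

# Four ordinary people plus a hyper-sensitive "Lola":
crowd = [1.0, 1.0, 1.0, 1.0, 100.0]
# The same group after a choice that delights Lola but zeroes out one person:
zeroed = [0.0, 1.0, 1.0, 1.0, 120.0]

print(aggregate(crowd))   # sum is dominated by Lola's 100
print(aggregate(zeroed))  # sum goes *up*, but product and min collapse to 0
```

The additive rule prefers the second outcome (123 versus 104), while the product and minimum rules reject it outright --- which is exactly the disagreement the paragraph above describes.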
The overall question is fairness --- what is it? How can it be measured? If a choice turns out splendidly for multitudes, but causes horrible pain to one human being, can it be right? Contrariwise, can one person block an action that everyone else urgently desires? How can people work together to make things turn out as well as possible for us all?
Sunday, November 28, 1999 at 19:25:04 (EST) = Datetag19991128
On the subject of utilitarianism and the problem of "utility monsters," I thought the best approach would be to calibrate the 'utility scale' for each being to the same range (say, 0 to 1), and assume that Lola would just have to live with her stronger feelings. - RadRob
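RadRob's calibration idea amounts to a per-person rescaling. A minimal sketch, assuming each individual's own best and worst possible feelings are knowable (a big assumption --- the hypothetical lo/hi bounds below are invented for illustration):

```python
def normalize(raw, lo, hi):
    """Rescale one person's raw utility reading onto the shared 0-to-1 scale,
    using that person's own worst (lo) and best (hi) possible values."""
    return (raw - lo) / (hi - lo)

# Lola's feelings swing over a huge span; a calmer person's over a small one.
lola = normalize(500.0, lo=-1000.0, hi=1000.0)   # Lola, moderately pleased
calm = normalize(0.75, lo=0.0, hi=1.0)           # an ordinary person, equally pleased
# After calibration the two contribute identical weight to any combining rule.
```

Under this scheme the intensity of Lola's feelings no longer buys her extra votes, which is precisely the "she would just have to live with it" position.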