OpaqueJustice


The Law is a lovely and brilliant invention: a set of rules, laid down in advance and visible to all, that define "right" and "wrong" in social interactions — a huge step forward from arbitrary decisions by a boss-man when conflicts arise. And the payoff of a system of Law isn't just in terms of greater individual happiness. With open and stable rules in place, people can make long-term investments; capital can accumulate; productivity, and thereby general prosperity, can grow. As a businessman involved in international trade explained to me, it doesn't even matter (much) what the rules are, as long as they're predictable. Commerce can adapt to almost anything.

But unfortunately the Law, as currently implemented, is quite an ass when uncertainty is involved. Consider a situation where there's a small probability of something horrible happening. Is it legitimate to impose sanctions or penalties right now, preemptively, to prevent that bad thing? Contrariwise, how fair is it to retroactively fine someone for damages that were unforeseeable, given the information available at the time?

Some cases are more straightforward than others. If your state-of-the-art explosives factory blows up on the average every N years and does X amount of damage, then most people would agree that it's only fair to ask you to pay (at least) X/N each year into a fund to compensate future victims. It certainly feels unjust to let you run an operation naked, without any cushion, and then see you go bankrupt when there's a disaster — even though you make more money in the short run.
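
To make the arithmetic concrete, here is a tiny sketch of that X/N reasoning with hypothetical round numbers (the $50 million and 25-year figures below are invented for illustration, not taken from any real factory):

```python
# Back-of-the-envelope version of the X/N argument above.
# X and N are hypothetical round numbers, chosen only for illustration.
X = 50_000_000   # damage per disaster, in dollars (assumed)
N = 25           # average years between disasters (assumed)

fair_annual_contribution = X / N   # expected damage per year
print(f"Fair yearly payment into the victims' fund: ${fair_annual_contribution:,.0f}")
# -> Fair yearly payment into the victims' fund: $2,000,000
```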

Well and good ... but that was an easy case. Consider other dimensions of uncertainty. Suppose it is somehow known, to a certainty, that among a list of 500 people there are 20 who plan to do something that will surely kill 3000 innocent victims. Can 500 suspects, 480 of whom are perfectly innocent, be inconvenienced to save the lives of 3000? How much inconvenience may be imposed? What kind of compensation should the 480 receive? Nobody (maybe!) would want to murder 500 human beings to save 3000 — but many would agree to annoy 500 slightly. What's the right balance point?

And more interestingly: what if there's only a 10% chance of such an evil plot? Or less than a 0.01% chance, making the likely lives saved less than 1? What's "justice" now? And getting into still murkier situations, suppose that the very act of aggressive investigation causes the plot to be cancelled — a self-negating prophecy? Now there are not, and never were, any "guilty" parties.
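
The expected-value arithmetic behind those probabilities is simple enough to spell out; the sketch below just multiplies the essay's hypothetical 3000 potential victims by the three plot probabilities mentioned:

```python
# Expected lives saved for the plot probabilities discussed above.
# Purely illustrative; the 3000 figure and the probabilities come
# straight from the hypothetical scenario in the text.
victims = 3000
for p_plot in (1.0, 0.10, 0.0001):   # certainty, 10%, 0.01%
    expected_lives_saved = p_plot * victims
    print(f"P(plot) = {p_plot:.4%}  ->  expected lives saved = {expected_lives_saved:g}")
# At a 0.01% chance the expectation is 0.3 lives -- less than one,
# yet 500 people would still bear the inconvenience.
```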

Or to broaden the field of scenarios, think about the fuzzy trade-offs between widespread ownership of guns (and the associated direct risks of injury) ... versus unknown indirect reductions in the rates of burglary, rape, etc. ... versus even-less-knowable changes in the odds of national defense against hypothetical invaders. Or consider the visible deaths from automobile accidents versus the invisible lives saved through the economic efficiencies of lower transportation costs. What's the right trade-off there?

Different societies, and different groups in a single society, can come to quite different conclusions. And in a sense, in order to respond to probabilistic hazards we already accept many annoyances every day. We carry drivers' licenses and passports to prove who we are. We allow law enforcement officers to ask us questions even when we're not guilty of any crime. These (usually) minor hassles are accepted as part of the cost of living together.

Now add public policy into the mix, for an even stranger brew. Sometimes a society may absolutely refuse to even consider certain important inputs to a correlation function. For instance, females and males seem to have different mortality rates. An actuary would deduce that a life insurance policy should cost less for the sex with the longer life expectancy, but that, contrariwise, an annuity should cost more. Is that OK, or discriminatory? How about slicing the risk pool along other dimensions such as race or religion? Unfair, even if the statistics say otherwise? Yet it's (almost) universally accepted that, for instance, age is a legitimate differentiator. A teenager doesn't pay as much for term life as a centenarian. Automobile liability insurance is biased the other way. If the Law demanded equal insurance rates for all, how many policies would get written?
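
A toy calculation shows the actuary's deduction in miniature. The mortality rates and life expectancies below are made-up round numbers, and discounting, expenses, and profit margins are all ignored; the only point is the direction of the price differences:

```python
# Toy actuarial pricing: the higher-mortality group pays more for
# one-year term life insurance but less for a simple annuity.
# All figures are invented; discounting and expenses are ignored.
payout = 100_000                 # term-life death benefit, dollars
annuity_income = 10_000          # annuity payment per year, dollars

groups = {
    "longer-lived group":  {"death_prob": 0.002, "life_expectancy": 22},
    "shorter-lived group": {"death_prob": 0.004, "life_expectancy": 18},
}

for name, g in groups.items():
    term_premium = payout * g["death_prob"]                 # expected payout this year
    annuity_price = annuity_income * g["life_expectancy"]   # expected total payments
    print(f"{name}: term life ~${term_premium:,.0f}/yr, annuity ~${annuity_price:,.0f}")
# Term life costs twice as much for the shorter-lived group,
# while its annuity is cheaper -- exactly the asymmetry described above.
```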

A possibility: in areas where there's a need to discriminate but a deep social aversion to it, could help come from an opaque algorithm — a procedure (like some neural nets) where inputs are linked to outputs via inscrutably complex black-box relationships? No human sets the algorithm's parameters. They evolve from a set of training examples and include random elements. If the outcome seems to have a statistical bias in one direction or another, it's just a consequence of the inputs and the luck of the draw ... not a deliberate act of human prejudice.
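
As a rough illustration of what such an opaque procedure looks like in practice, here is a minimal sketch using scikit-learn's MLPClassifier on synthetic data; the data, features, and network size are all invented, and the point is only that the decision rule's parameters emerge from training examples plus random initialization rather than from anyone's explicit choices:

```python
# Minimal sketch of an "opaque" decision procedure: a small neural net
# whose parameters nobody sets by hand.  Everything here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # 500 cases, 4 anonymous input features
# Synthetic "ground truth": a noisy function of the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=1)
model.fit(X, y)

print("decisions for the first ten cases:", model.predict(X[:10]))
print("first-layer weights (inscrutable to a human reviewer):")
print(model.coefs_[0])
```

Any statistical tilt in the flagged cases traces back to the training data and the random seed, not to a rule that some official wrote down.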

Such opaque systems are already used, to some extent, in picking tax returns for audit, airline passengers for inspection, or loan applications for approval. Should that be stopped? Must all such choices be made in broad daylight via transparent sets of rules? Even if exposing the rules to inspection promotes gross evasion by cheaters?

Tough questions, when utilitarian principles clash with other civilized values. What's the best answer?


TopicJustice - TopicScience - 2002-01-29



I used to think that Sweden was the land of social experiments writ large. At present it seems as if the US is firmly on the bleeding edge, in particular totally revising its traditional views on individual freedoms. – Bo Leuf


(correlates: TemporalUtilitarianism, LessMore, ChatTuringTest, ...)