Last lecture, I started off talking about political theory, and then did a little jump to decision theory–how to rationally make decisions. I explained two ideas–the idea of maximizing average benefits when deciding under risk with known probabilities, and the idea of maximizing the worst possible outcome (maximin) when deciding under total uncertainty, with no probabilities at all. Now I want to talk about the rationales for these ideas.
The idea of making decisions from known probabilities comes from two main places: gambling and insurance. Start with gambling. The idea of a fair coin, fair die, or fair roulette wheel is that in the long run each of the possibilities will come up an equal number of times, or at least in some known proportion of times. (Write “heads” on one side of a die and “tails” on the other five sides, and you’ve got something equivalent to a coin weighted so that it comes up tails five times out of six, with this fact being known.) So since the whole idea of the probabilities involved is about what happens in the long run, you can use that to make decisions about how your choices will turn out in the long run. For example, if you have a bet where you win $5 on “heads” and lose $4 on “tails,” and you take that bet a hundred times, you should make about $50: 50 heads times $5 is $250, 50 tails times negative $4 is negative $200, and adding the two gives you $50. There’s some chance a fluke would throw this off, but the chances of a major fluke go down the more times you bet. And the “average result” rule tells you the right thing here: $5 times a one-half chance of heads, minus $4 times a one-half chance of tails, is 50 cents, which is positive, telling you to take the bet. Same result as looking to the long run.
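The arithmetic of that coin bet can be sketched in a few lines of Python (the $5/$4 numbers are the lecture's example; the code is just illustration):

```python
# Expected value of the coin bet from the lecture:
# win $5 on heads, lose $4 on tails, with a fair coin.
p_heads = 0.5
ev_per_bet = p_heads * 5 + (1 - p_heads) * (-4)

print(ev_per_bet)        # 0.5 -- fifty cents per bet, so take it
print(100 * ev_per_bet)  # 50.0 -- about $50 over a hundred bets
```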
The situation with insurance is similar, though it tends to be more all at once than over the long run. The idea is that the insurance agent has statistics that give him a pretty good idea of how much money his clients will collect in a given financial period, so he knows how much to charge each of his clients to make a profit–and he can charge the higher-risk ones more. He has to charge them more: if he charged everyone the same rate, the low-risk clients would leave, but the remaining high-risk clients would collect just as much money as before, and that would throw off the balance sheets. Really, the math is the same as with coin flips, but you divide people into more groups and want to make money off every group. Think of it as juggling bets with coins, dice, cards, and a roulette wheel all at the same time, with different payoffs for different bets. But the math is the same. Does that make sense?
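As a rough sketch of the pricing math (all the numbers here are made up for illustration, not real actuarial figures), each group's premium is just its expected payout plus a margin–the same arithmetic as the coin bets:

```python
# Toy insurance pricing by risk group (hypothetical numbers).
# The insurer charges each group its expected payout plus a margin,
# so higher-risk groups pay more.
groups = {
    "low_risk":  {"claim_prob": 0.01, "avg_claim": 50_000},
    "high_risk": {"claim_prob": 0.05, "avg_claim": 50_000},
}
margin = 1.10  # a 10% profit margin on top of expected payouts

for name, g in groups.items():
    premium = g["claim_prob"] * g["avg_claim"] * margin
    print(name, premium)  # low_risk: about $550, high_risk: about $2750
```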
Smart people will be asking a question now: how do you account for the customers of insurance companies? If the company makes a profit on average, then the customers must lose money on average–so are they just dumb, like people who play the lottery? Remember, it wouldn’t be theoretically impossible to steal the stats used by insurance companies and use them to make the same calculation.
You might just say here that the averaging rule only applies when you have a lot of different opportunities, so that things are likely to really average out. But that’s not what most philosophers who’ve written on decision making have actually said.
The first step to understanding why requires dealing with the insurance problem, which involves the idea of diminishing returns, and the distinction between money and real utility. The idea is that not every dollar you make is worth the same amount in terms of your well-being. This is actually very easy to see in the insurance case: the money that goes towards the heart surgery you need counts more towards your real well-being than the money that goes towards the boat you don’t need, or any similar frivolous luxury. So you don’t have to be especially risk-averse to forgo frivolous luxuries in order to afford health insurance. That’s the distinction between money and real happiness. The standard idea is that you have a vague sort of quantification of how much happiness something brings you, and that’s what you should plug into the averaging rule. Also, in general any given dollar will be less useful to you the more money you have. $10,000 a year would be a lot to most of you college students sitting here in class, but it would be barely worth keeping track of for someone like Bill Gates or Warren Buffett.
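One standard way to model diminishing returns–my illustration here, not anything the lecture is committed to–is a logarithmic utility-of-money function, under which the same $10,000 adds far more utility to a student than to a billionaire:

```python
import math

# Diminishing returns sketch: with logarithmic utility of wealth
# (a common textbook assumption, purely illustrative), the utility
# gained from an extra dollar shrinks as wealth grows.
def utility_gain(wealth, extra):
    return math.log(wealth + extra) - math.log(wealth)

student_gain = utility_gain(20_000, 10_000)            # about 0.405
billionaire_gain = utility_gain(1_000_000_000, 10_000) # about 0.00001

print(student_gain > 1000 * billionaire_gain)  # True
```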
Why accept this approach to the problem of risk? Why not just accept some risk-minimizing principle, like the maximin principle for total uncertainty? Perhaps most obviously, because we think the size of a risk matters. The size of a risk matters even when we are dealing with our own lives. A 50% chance of death, except in the most extreme of circumstances, is completely unacceptable, whereas a 0.1% chance of death might be acceptable. I won’t give up driving, or walking and biking where I occasionally have to cross the street, or flying places in planes, because of the risk of dying in an accident. Notice that with chances of our own deaths, there is no way to make back our losses in the long run, as there is with the gambling examples. Also notice that this shows rhetoric about the absolute importance of life is misleading. There is a certain reduction in my quality of life that I would not accept to avoid a 0.1% chance of death. I imagine the same is true for all of you. Some economists have actually taken to calculating the value people place on their own lives–putting a dollar value on a life–based on what kinds of risks people are willing to take with their lives for economic gain. Finally, notice how impossible it is to totally eliminate risk. Even when you’re averaging your betting over 10,000 die rolls, there’s still a possibility that you could lose every bet.
All this means that we need a way of weighing risk, even when we might not be able to earn back what we lose over the long run. The average expected utility principle seems like a pretty good principle for doing this.
Now: what about when you don’t have statistics, or exact probabilities? It seems plausible to a lot of people here that you should try to minimize what you’d lose. If one possible outcome is death, and you have no idea how large the risk of that is, it makes more sense to go way out of your way to avoid that. It’s plausible in a way that it’s not plausible to go way out of your way to avoid a small known risk of death.
In his writings on justice, Rawls goes a little further in arguing that the maximin principle is appropriate to the original position. He doesn’t insist that it’s appropriate to all decision making under uncertainty, but argues it’s appropriate to the scenario he envisions for two additional reasons: we’re in a position to give everyone in society a decent basic standard of living, and stuff above this basic standard isn’t actually worth all that much.
There’s an important challenge to the maximin principle here. Some philosophers claim that really, we don’t need any special principles for decisions under total uncertainty. They argue that when you’re given possibilities without probabilities attached, you should treat all the possibilities as equally likely–this is called the principle of indifference.
Two objections. One: if you have no idea what the probabilities are, where do you get off making assumptions about the probabilities? Two, more concretely: there are a lot of situations where it’s clearly fallacious to try to estimate the probability of something by counting the ways it could happen, but that seems to be exactly what the principle of indifference does.
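A classic illustration of that second objection (my example, using a hypothetical 6-of-49 lottery): "either I win the lottery or I don't" gives two possibilities, but treating them as equally likely is obviously wrong:

```python
from math import comb  # Python 3.8+

# Two possible outcomes: win or lose.
naive_p_win = 1 / 2             # what counting-the-ways would suggest
actual_p_win = 1 / comb(49, 6)  # real odds of a 6-of-49 jackpot: ~1 in 14 million

print(naive_p_win, actual_p_win)
```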
I confess I’m not really sure what to make of that debate. But I actually don’t think it’s that important: worrying too much about what to do in situations of absolute uncertainty misses the point. We never find ourselves in a case of absolute uncertainty. We may rarely know exact probabilities, but we can get an idea of where they lie. Most people don’t have memorized statistics on airline safety, and some are terrified of flying. But even the ones who are terrified of flying, who greatly overestimate the odds of dying in an airline crash (airplanes are actually safer than cars, by the way)… even those who greatly overestimate the risk of flight can still guess that it’s safer than playing Russian Roulette. On the other hand, even when you have what look like sure statistics, the statistician could actually be incompetent, or the dealer could be cheating. There’s no way to get an exact number on the probability of your statistics being reliable.
What about mistakes people make in estimating probability? What about the argument that you shouldn’t estimate probabilities by counting possibilities? Well, this may not be as good a method as getting the statistics, but you wouldn’t expect an estimate to be perfect. Estimates are for when you can’t get a perfect measurement.
Finally, getting back to Rawls. He asks us to imagine designing a society without knowing where in society we will be. He says that we should act as if we have no idea what the odds are of our being any particular member of society–but why not treat it as if there’s an equal chance of your being each person? Rawls’ “veil of ignorance” looks like the ideal situation for applying the principle of indifference, whatever you think of other situations.
What does accepting average utility in this case mean for social policy? I think the results are very intuitive. It means that it’s OK to enact an economic plan that will benefit most people greatly, even if a few people lose their jobs. It’s OK to enact welfare reform that will benefit most people greatly, even if a few people lose their welfare benefits. Do you see why? If you’re sitting under the veil of ignorance, a policy that hurts a few people and benefits many is a pretty good bet, because you’re more likely to be one of the many than one of the few–though of course, as with insurance, you might also accept a guaranteed small cost to avoid a risk of something horrible.
Rawls, on the other hand, would say you should apply the maximin principle, and worry a lot about what would happen if, once the veil of ignorance were lifted, you turned out to be one of the least-advantaged members of society. On his view, you can’t allow any inequalities that don’t benefit the worst-off. That prohibits both of the sorts of policies I just described, even though they benefit most people. There’s a sense in which following Rawls’ difference principle could actually increase overall inequality, if you screw your middle class for the sake of a few very poor people–Rawls could be forced to recommend that, under the right circumstances. That seems like an extreme, counter-intuitive view.
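To see how sharply the two decision rules can disagree, here's a toy comparison with made-up utility numbers for a five-person society (purely illustrative; neither Rawls nor his critics work with numbers like these):

```python
# Two hypothetical policies, each giving a utility level to every
# member of a tiny five-person society (invented numbers).
policy_a = [10, 10, 10, 10, 2]  # big gains for most, one person badly off
policy_b = [4, 4, 4, 4, 4]      # everyone equal, lower average

average = lambda u: sum(u) / len(u)  # the average-utility rule
maximin = min                        # judge a policy by its worst-off member

print(max([policy_a, policy_b], key=average))  # average utility picks A
print(max([policy_a, policy_b], key=maximin))  # maximin picks B
```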
The result of applying the average expected utility principle to the “veil of ignorance” is a basically consequentialist political philosophy. It allows for generally beneficial policies that make some people worse off. But it also allows for redistribution of wealth. Remember what I said about diminishing returns? The money a rich person spends on a yacht could do more good spent on education or health care for the poor. The ultimate policy outcome, from the consequentialist spin on Rawls, is something most people accept at least partially: no one in the United States wants to stop taxing rich people to pay for poor people’s high schools. This doesn’t mean we should completely redistribute the wealth–you have to leave people with an incentive to work. But it means it’s okay to tax the rich at 15, 20, 30 percent, whatever tax rate gives the best results, in order to pay for things like education and health care. Again, from the veil of ignorance: better to be guaranteed $10,000 for your education than to have a 1 in 100 chance at $1.1 million. (If you were just maximizing money, you’d take the shot at the $1.1 million, since it’s worth $11,000 on average; but if you’re maximizing utility rather than money, it can make sense to take the guaranteed money for education.) You don’t want redistribution to put too much of a damper on the workings of the economy, but a little bit of economic inefficiency becomes OK if it means most people are getting a decent standard of living as opposed to a few people getting yachts.
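Here's the education example worked out, assuming–my assumption, for illustration–a logarithmic utility function and a baseline of $5,000 of wealth either way:

```python
import math

# Guaranteed $10,000 vs. a 1-in-100 shot at $1.1 million.
# In money terms the gamble wins; with diminishing-returns (log)
# utility, the sure thing can win. Baseline wealth is hypothetical.
baseline = 5_000

ev_money = 0.01 * 1_100_000  # expected cash value of the gamble: $11,000
sure_utility = math.log(baseline + 10_000)
gamble_utility = 0.01 * math.log(baseline + 1_100_000) + 0.99 * math.log(baseline)

print(ev_money > 10_000)            # True: the gamble wins in money
print(sure_utility > gamble_utility)  # True: the sure thing wins in utility
```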
Next time, we’ll talk about the idea of a free market, something I probably actually should have hit before Rawls. Then we’ll talk about the stricter forms of libertarianism that have come out of Robert Nozick, who is often cited as the number 2 political philosopher of the 20th century, after Rawls.
Hey, I thought you might want to check out Barefoot Bum’s take on free markets.
http://barefootbum.blogspot.com/2008/08/communism-and-free-markets.html
http://barefootbum.blogspot.com/2008/04/property-or-myth-of-free-market.html
and a few nuggets of wisdom on economics:
http://barefootbum.blogspot.com/search/label/economics