Monday, April 22, 2019

Criticisms of Instrumental Rationality

As a follow-up to the post Agents are Instrumentally Rational, I thought it would be good to discuss a number of criticisms of instrumental rationality. It's healthy to do so, since political actors are not cold, calculating automatons. In elections, voters do not vote rationally. Politicians may or may not behave rationally. If political actors are (gasp) human, then examining the flaws of rationality may help us build better models of political scenarios. And unlike economists, we are trying to fit theory to reality.

There is a growing literature testing predictions "instrumental rationality" makes. Alarmingly, the vast majority of this literature finds actors are not instrumentally rational.

The basic problem seems to be that expected utility models actors as computers trying to optimize some quantity. But the human brain is more like a sophisticated "pattern recognition machine", with built-in "short circuits" that avoid heavy computation but tend to produce false positives. (This is useful for tasks which do not need heavy computation; it is a feature, sometimes a bug.) This is the pioneering work of Daniel Kahneman and Amos Tversky. There is a wonderful book, Thinking, Fast and Slow, by Kahneman himself, summarizing the research.

There are two points of particular concern I'll mention here. In another post, I will go on a philosophical spelunking expedition into where this particular notion of "rationality" comes from (Hume) and the criticisms it has faced in the past.

Allais Paradox

Let's try testing the framework of preferences over prospects, specifically the independence axiom. First, you have to make one choice between two alternative lotteries:

Lottery 1:
Win $2500 with probability 33%
Win $2400 with probability 66%
Win nothing with probability 1%

Lottery 2: Win $2400 with certainty.

Once you have made that choice, you need to choose between two more lotteries:

Lottery 3:
Win $2500 with probability 33%
Win nothing with probability 67%

Lottery 4:
Win $2400 with probability 34%
Win nothing with probability 66%

Which would you prefer? What does expected utility suggest?

The expected winnings from Lottery 1 are $2409, whereas the expected winnings from Lottery 2 are $2400. An actor maximizing expected winnings would pick Lottery 1, yet most people empirically choose the sure $2400 of Lottery 2. On its own this is no scandal: a sufficiently risk-averse expected utility maximizer may legitimately prefer the certainty.

Similarly, the expected winnings from Lottery 3 are $825, whereas the expected winnings from Lottery 4 are $816. Here, however, most people choose Lottery 3 over Lottery 4. It is the combination that causes trouble: no expected utility maximizer, risk-averse or not, can prefer Lottery 2 in the first choice and Lottery 3 in the second.
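For concreteness, here is a quick sketch (in Python, purely illustrative) reproducing the expected winnings quoted above:

```python
# Quick check of the expected winnings quoted above.
# Each lottery is a list of (probability, payout) pairs.
lotteries = {
    "Lottery 1": [(0.33, 2500), (0.66, 2400), (0.01, 0)],
    "Lottery 2": [(1.00, 2400)],
    "Lottery 3": [(0.33, 2500), (0.67, 0)],
    "Lottery 4": [(0.34, 2400), (0.66, 0)],
}

for name, lottery in lotteries.items():
    expected = sum(p * payout for p, payout in lottery)
    print(f"{name}: expected winnings = ${expected:,.2f}")
# Lottery 1: $2,409.00   Lottery 2: $2,400.00
# Lottery 3: $825.00     Lottery 4: $816.00
```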

The Allais Paradox is the observation that, in experiments, volunteers' behavior directly contradicts what expected utility theory predicts. Sugden's Rational Choice: A Survey of Contributions from Economics and Philosophy (1991) reviews the literature on this topic.

The conflict is with the axiom of independence. Under Savage's alternative axiomatization of rational behavior ("Savage's axioms"), the Allais paradox instead contradicts the "sure-thing principle".
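To spell the conflict out: if a single utility function u rationalized both modal choices, preferring Lottery 2 to Lottery 1 and Lottery 3 to Lottery 4 would require

\[
u(2400) > 0.33\,u(2500) + 0.66\,u(2400) + 0.01\,u(0)
\quad\Longrightarrow\quad
0.34\,u(2400) > 0.33\,u(2500) + 0.01\,u(0),
\]
\[
0.33\,u(2500) + 0.67\,u(0) > 0.34\,u(2400) + 0.66\,u(0)
\quad\Longrightarrow\quad
0.33\,u(2500) + 0.01\,u(0) > 0.34\,u(2400),
\]

and the two right-hand inequalities cannot both hold. The independence axiom makes the same point without algebra: Lotteries 1 and 2 share a common 66% chance of winning $2400, and Lotteries 3 and 4 share a common 66% chance of winning nothing, so stripping away the common part should not reverse the preference.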

Source of Beliefs

So, how does a rational actor "acquire" beliefs? (An instructive exercise for the reader, harking back to Socrates, is to consider how the reader "acquires" beliefs.)

For some political actors, it doesn't really matter. I'm pretty certain Senator Ted Cruz has beliefs on almost everything, and as far as how he acquired them, well, it doesn't matter.

For other types of political actors, like your "everyday voters", it does matter. Voter belief is actually a hotly debated topic: to what degree voters are "rational actors", where they acquire their party identification, and how they develop affinity for a candidate are all hot topics.

There is universal agreement in the literature that rational agents update their beliefs using Bayesian inference. We should recall from probability that Bayes' theorem can be viewed as a generalization of the contrapositive in logic. Heuristically, what happens is we have some mathematical model of the world using random variables and parameters (denoted θ in the literature, possibly a vector of parameters). Some event E occurs, and we adjust our beliefs about the model's parameters based on the event occurring.

[I don't have the space to describe the details (though I should in some future blog post); the interested reader is encouraged to read John Kruschke's Doing Bayesian Data Analysis for details.]
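As a minimal sketch of what this updating looks like in practice (a toy conjugate model chosen for illustration, not anything specific to the political science literature): suppose θ is the probability some claim about the world is true, the actor's prior over θ is a Beta distribution, and the events E are yes/no observations.

```python
# Toy Bayesian updating: a Beta prior over theta (the probability some
# claim is true), updated on a sequence of yes/no observations.
# Beta is conjugate to the Bernoulli likelihood, so the posterior is
# again a Beta distribution; the update amounts to counting.

def update_beliefs(alpha, beta, observations):
    """Return the posterior Beta parameters after observing 0/1 events."""
    successes = sum(observations)
    failures = len(observations) - successes
    return alpha + successes, beta + failures

prior = (1.0, 1.0)          # Beta(1, 1): a flat "initial belief" about theta
evidence = [1, 1, 0, 1]     # events E the actor happens to run into
alpha, beta = update_beliefs(*prior, evidence)

print(alpha, beta, alpha / (alpha + beta))   # 4.0 2.0 0.666...
```

The flat Beta(1, 1) prior is doing real work here, which is precisely the sore spot the next question pokes at.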

But where do the initial estimates for prior distributions come from? How do the "initial beliefs" emerge? There are two answers to this query.

First answer: from the search for information itself. People do not sit around waiting for "information" to fall into their laps. No! Rational actors must actively pursue information. The original "prejudices" (initial beliefs) are adjusted as more information is actively obtained.

When does a rational actor cease seeking information? Economists answer, with either a sigh or a smile: the rational actor will stop when the marginal utility of the information gained equals the marginal cost of the search for that information (with the cost evaluated in utility terms). As long as more utility is gleaned from seeking than it costs to seek, a rational actor will keep searching. This is a rather cute, self-consistent solution.
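A toy sketch of that stopping rule, with an assumed diminishing-returns benefit schedule (the particular function is invented for illustration, not part of the argument):

```python
# Toy model of the economists' stopping rule: keep searching while the
# expected marginal benefit of one more piece of information exceeds the
# marginal cost of obtaining it.

def marginal_benefit(n_searches):
    """Assumed diminishing returns: the (n+1)-th search is worth 100/(n+1)."""
    return 100.0 / (n_searches + 1)

def searches_performed(cost_per_search):
    n = 0
    while marginal_benefit(n) > cost_per_search:
        n += 1
    return n

print(searches_performed(cost_per_search=20.0))   # stops after 4 searches
```

Notice the sketch simply hands the actor a marginal_benefit function. Whether a real actor could know any such thing before doing the search is exactly the problem.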

But it begs the question: how does an actor know how to evaluate the utility of the new information prior to obtaining it? Perhaps our actor has formulated expectations about the value of additional information. How did our actor acquire that expectation of the value of information?

A waggish defender might say, "By acquiring information about the value of information up to the point where the marginal benefits of this (second-order) information were equal to the costs." This solution really degenerates into an infinite regress, since we can ask the same question of how the actor knows the value of this second-order information.

There are two ways to stop the infinite regress:

  1. Something additional is needed. This concedes the instrumental rationality paradigm is incomplete.
  2. The only alternative would be to assume that the individual knows the benefits to expect from a little more search because the actor already knows the full information set. But then there is no problem: the actor knows everything already!

I wonder if we might not be more generous with the waggish defender, and try to bootstrap some interpolated polynomial from the first-order, second-order, ..., Nth-order costs of information? My intuition suggests the answer is "In general, no; only for a few special edge cases can this bootstrap occur coherently."

A last remark: this discussion reminds me of Meno's paradox (not to be confused with Zeno's paradox). In Plato's dialogue Meno, Meno asks Socrates, "And how will you inquire into a thing when you are wholly ignorant of what it is? Even if you happen to bump right into it, how will you know it is the thing you didn't know?" [80d1-4] Socrates reformulates it thus: "[A] man cannot search either for what he knows or for what he does not know[.] He cannot search for what he knows—since he knows it, there is no need to search—nor for what he does not know, for he does not know what to look for." [80e] Or, phrased more relevantly for our discussion: how can a rational actor actively pursue information about a matter of which the actor is completely ignorant?

I'm sure the rejoinder would be, "Rational actors seek information precisely as Socrates sought answers to questions on matters he professed ignorance about." But it still dodges the question.

Second answer: beliefs as purely subjective assessments. This follows Savage's The Foundations of Statistics (1954), where beliefs are purely subjective assessments: they are what they are, and are only revealed ex post by the choices people make.

This avoids a lot of the problems plaguing the first answer. Unfortunately, we have some experimental evidence casting doubt on the consistency of such subjective assessments, and more generally on probabilistic representations of uncertainty, the most famous of which is the Ellsberg paradox.
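To recall the one-urn version: an urn contains 30 red balls and 60 balls that are black or yellow in unknown proportion, and a bet pays a fixed prize if the named color is drawn. Most people prefer betting on red over betting on black, yet also prefer betting on "black or yellow" over "red or yellow". Any single subjective probability assignment would then have to satisfy

\[
P(\text{red}) > P(\text{black})
\quad\text{and}\quad
P(\text{black}) + P(\text{yellow}) > P(\text{red}) + P(\text{yellow}),
\]

which is impossible, since the second inequality reduces to P(black) > P(red).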

Game theory has pursued the line of reasoning Savage provides. But purely subjective beliefs may license almost any kind of action, rendering instrumental rationality nearly vacuous. Game theorists have sought to prevent these "purely subjective assessments" from turning against the theory [i.e., letting "anything" count as a solution describing rational behavior] by supplementing instrumental rationality with the assumption of the common knowledge of rationality. This leads to a weak solution concept for games called rationalizability, not to be confused with the psychological mechanism of "rationalizing" (i.e., lying to one's self to feel better).
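As a rough sketch of the flavor of rationalizability (illustrative only: the toy version below deletes pure strategies strictly dominated by other pure strategies, whereas the full concept also allows domination by mixed strategies, and the game itself is made up), common knowledge of rationality lets each player iteratively discard strategies no rational opponent would ever play:

```python
# Toy sketch of the idea behind rationalizability: iteratively delete pure
# strategies that are strictly dominated, assuming rationality is common
# knowledge. (Full rationalizability also considers mixed-strategy
# dominators; this is only the flavor of the idea.)

# Payoffs keyed by (row_strategy, col_strategy) -> (row_payoff, col_payoff).
GAME = {
    ("Up", "Left"): (3, 3), ("Up", "Right"): (1, 1),
    ("Down", "Left"): (0, 2), ("Down", "Right"): (0, 1),
}

def dominated(strat, own_strats, opp_strats, payoff):
    """True if some other own strategy beats `strat` against every opponent move."""
    return any(
        all(payoff(alt, opp) > payoff(strat, opp) for opp in opp_strats)
        for alt in own_strats if alt != strat
    )

def iterated_elimination(rows, cols):
    row_payoff = lambda s, o: GAME[(s, o)][0]   # row player's payoff
    col_payoff = lambda s, o: GAME[(o, s)][1]   # column player's payoff
    changed = True
    while changed:
        kept_rows = [r for r in rows if not dominated(r, rows, cols, row_payoff)]
        kept_cols = [c for c in cols if not dominated(c, cols, kept_rows, col_payoff)]
        changed = (kept_rows != rows) or (kept_cols != cols)
        rows, cols = kept_rows, kept_cols
    return rows, cols

print(iterated_elimination(["Up", "Down"], ["Left", "Right"]))
# (['Up'], ['Left']): only (Up, Left) survives in this toy game.
```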

References

  • Shaun Hargreaves Heap and Yanis Varoufakis, Game Theory: A Critical Introduction. Second ed., Routledge, 2004.
  • John Searle, Rationality in Action. MIT Press, 2001.
