Hargreaves Heap and Varoufakis summarize the last assumption of game theory's axiomatization of rational behavior in the "consistent alignment of beliefs" axiom: no instrumentally rational person can expect another, likewise rational, person who has the same information to develop different thought processes.
This is usually justified by the Harsanyi doctrine: when two rational people examine the same information, they must draw the same inferences, and independently come to the same conclusion.
Robert Aumann fiercely defended this principle in his article "Agreeing to Disagree" (1976) and his earlier article "Subjectivity and Correlation in Randomized Strategies" (1974).
Aumann argues that if you assess the probability of rain tomorrow at 75% and I assess it at 33%, then we must have different information, and we should update our probabilities until we converge on some shared estimate. That is, through dialogue, we (as rational actors) will arrive at a conclusion we both agree upon.
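To see the mechanics, here is a minimal sketch (in Python, with made-up states, priors, and partitions) of the dialogue underlying Aumann's theorem, along the lines of the Geanakoplos–Polemarchakis "we can't disagree forever" protocol: both agents share a common prior, each privately observes which cell of their information partition contains the true state, and they alternate announcing their posterior for an event. Each announcement rules out states for the other, so the posteriors are driven together.

```python
from fractions import Fraction

# Hypothetical toy model: four equally likely states of the world.
states = {0, 1, 2, 3}
prior = {s: Fraction(1, 4) for s in states}   # the common prior
event_A = {0, 1}                              # "it rains tomorrow"
partition_alice = [{0, 1}, {2, 3}]            # Alice's private information
partition_bob = [{0, 2}, {1, 3}]              # Bob's private information

def cell(partition, s):
    """The cell of the partition containing state s."""
    return next(c for c in partition if s in c)

def post(info):
    """Posterior probability of event_A given an information set."""
    return sum(prior[s] for s in info & event_A) / sum(prior[s] for s in info)

def dialogue(true_state, max_rounds=10):
    common = set(states)   # states not yet ruled out by any announcement
    for _ in range(max_rounds):
        # Alice announces her posterior given her cell and past announcements.
        p_a = post(cell(partition_alice, true_state) & common)
        # Everyone rules out states in which Alice would not have said p_a.
        common = {s for s in common
                  if post(cell(partition_alice, s) & common) == p_a}
        # Bob announces; the common knowledge set is refined symmetrically.
        p_b = post(cell(partition_bob, true_state) & common)
        common = {s for s in common
                  if post(cell(partition_bob, s) & common) == p_b}
        if p_a == p_b:
            return p_a     # agreement reached
    return None

print(dialogue(true_state=0))   # Alice assesses 1, Bob initially 1/2; ends at 1
```

In this toy example Alice's first announcement effectively reveals her cell, so Bob concedes after a single exchange; with finer partitions the back-and-forth can run for several rounds before the posteriors meet.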
When we combine "consistent alignment of beliefs" with the common knowledge of rationality, we end up with common priors (i.e., a shared source of beliefs). The connection is this: if you know you are rational, and you know your adversary is rational, and (using consistent alignment of beliefs) you know your thoughts about what your adversary might do spring from the same source as your adversary's thoughts about their own actions, then your adversary's actions will never surprise you. Beliefs are consistently aligned in the sense that, if you somehow learned your adversary's plans, you wouldn't want to alter your beliefs about those plans. Conversely, if your adversary knew your planned actions, your adversary wouldn't want to alter the beliefs about your prospective actions which underpin the planning of their own future actions.
Observe that this dialogue needs to happen in "real" (i.e., historical) time and not in "logical time" (in the sense of the length of a logical derivation of a hypothetical dialogue). Without such actual dialogue, there's no way to come to any agreement. Scott Aaronson has shown (arXiv:cs/0406061) that such a dialogue can be carried out in finite time and, in some sense, "efficiently".
One problem with this: inferring common priors from the premises of the Common Knowledge of Rationality coupled with the consistent alignment of beliefs places the dialogue in "logical time".
And in "one-shot games", where the players interact only once and without communication, there is literally no opportunity for such dialogue to occur in real time.
Prior Beliefs
We need some "initial beliefs" for our rational actors to hold, so as to avoid an infinite regress of reciprocal expectations about the actions pursued. We saw how rational actors update their beliefs via Bayesian updating, but we need some "initial prior" to start the process. Without common priors, we can get senseless results.
But the choice of prior distribution in Bayesian analysis can impact the posterior distribution considerably. The field of "reference priors" uses information theory to measure how the choice of prior affects the posterior. The choice of priors has a rich history, and while it is true that "objective" (or "noninformative") priors have "minimal impact" on the posterior, that is not the same as "zero impact". Noninformative priors can also lead to improper posteriors, which is dangerous. How to choose a prior seems to be a hotly contested topic (does the choice of priors "matter"? What is an appropriate way to do it?), one Andrew Gelman has written about extensively.
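As a small illustration (made-up data, standard conjugate formulas), consider estimating a coin's bias. With a Beta(a, b) prior and k heads observed in n flips, the posterior is Beta(a + k, b + n − k), so the posterior mean (a + k)/(a + b + n) moves visibly with the choice of (a, b):

```python
# Hypothetical data: 3 heads in 10 flips of a coin with unknown bias.
k, n = 3, 10

# Candidate priors on the bias, from "noninformative" to strongly opinionated.
priors = {
    "uniform  Beta(1, 1)":     (1.0, 1.0),    # flat prior
    "Jeffreys Beta(0.5, 0.5)": (0.5, 0.5),    # reference prior for the binomial
    "strong   Beta(50, 50)":   (50.0, 50.0),  # confident belief the coin is fair
}

for name, (a, b) in priors.items():
    # Conjugacy: Beta(a, b) prior + k heads in n flips -> Beta(a+k, b+n-k).
    post_mean = (a + k) / (a + b + n)
    print(f"{name:24s} -> posterior mean {post_mean:.3f}")
```

The "noninformative" choices land near the raw frequency 0.3 (0.333 and 0.318), while the confident Beta(50, 50) prior pulls the estimate to 0.482, nearly halfway back to fair. "Minimal impact" is real, but it is not zero, and with little data no prior is innocent.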
Even if we restrict ourselves to only "stable" priors, I'm not sure this is much progress.
Revenge of the ~~Nerds~~ German Philosophers
One thing which the German philosophers Kant and Hegel pondered was the self-conscious reflection of human reason upon itself. Can our reasoning faculty turn on itself and, if it can, what can it infer? Phrased more relevantly, when reason knowingly encounters itself in a game, does this tell us anything about what reason should expect of itself?
Hegel's Phenomenology of Spirit (or, more generally, his philosophy of Spirit) addresses this train of thought (and more). Further, Hegel takes Reason reflecting on reason as it reflects on itself as part of the restlessness which drives history. Outside of history, for Hegel, there are no answers to the question of what one's reason demands of others' reason. History provides a changing set of answers.
Also worth mentioning: game theory uses "reason" in a sense akin to Hume's usage in his famous passage:

> We speak not strictly and philosophically when we talk of the combat of passion and reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.
Reason is a tool to help achieve the ends of subjective passions. Hegel rejoins in his lectures on the History of Philosophy, in chapter 2 on Hume in particular:

> In itself reason thus has no criterion whereby the antagonism between individual desires, and between itself and the desires, may be settled. Thus everything appears in the form of an irrational existence devoid of thought; the implicitly true and right is not in thought, but in the form of an instinct, a desire.
Kant's Critique of Pure Reason, via his Transcendental Dialectic, investigates Reason's excesses. For other Kantian repudiations of game-theoretic "reason", see O'Neill's Constructions of Reason (1989), e.g., page 27 et seq.
Conclusion
So we have finally answered the question posed so long ago: beliefs are formed by taking into account common knowledge of rationality coupled with consistent alignment of beliefs. This bootstraps a rational actor's belief system from the beliefs of that actor's rational adversary, beliefs which have already solved the riddle of what the original actor's belief system is.
And if that sounds circular...that's because it is...
References
- Shaun Hargreaves Heap and Yanis Varoufakis, Game Theory: A Critical Introduction. Second ed., Routledge. (This is the axiomatization scheme I am following.)
- John Searle, Rationality in Action. MIT Press, 2001. (This provides a different set of axioms for rational behaviour, equivalent to those of game theory, and discusses their implicit assumptions & flaws.)
- S. Morris, "The Common Prior Assumption in Economic Theory". Economics and Philosophy 11 (1995) 227–253. Eprint.
- John Harsanyi, "Games with Incomplete Information Played by 'Bayesian' Players: Part I. The Basic Model". Management Science 14, 3 (1967) 159–182. Eprint.
- Robert J. Aumann, "Agreeing to Disagree" (PDF). The Annals of Statistics 4, 6 (1976) 1236–1239. doi:10.1214/aos/1176343654.
- Scott Aaronson, Common Knowledge and Aumann’s Agreement Theorem [blogpost]
- Scott Aaronson, "The Complexity of Agreement". Proceedings of ACM STOC (2005) pp. 634–643, eprint arXiv:cs/0406061