Game theory

From Wikipedia, the free encyclopedia.

This article discusses the mathematical modelling of incentive structures. For other games (and their theories) see Game (disambiguation).

Game theory is a branch of mathematics that uses models to study interactions with formalized incentive structures ("games"). It has applications in a variety of fields, including economics, evolutionary biology, political science, and military strategy. Game theorists study the predicted and actual behavior of individuals in games, as well as optimal strategies. Seemingly different types of interactions can exhibit similar incentive structures, thus all exemplifying one particular game.

John von Neumann and Oskar Morgenstern first formalized the subject in 1944 in their book Theory of Games and Economic Behavior.

The psychological theory of games, which originates with the psychoanalytic school of transactional analysis, remains a largely unrelated area.

Relation to other fields

Game theory has unusual characteristics in that while the underlying subject often appears as a branch of applied mathematics, researchers in other fields carry out much of the fundamental work. At some universities, game theory gets taught and researched almost entirely outside the mathematics department.

Game theory has important applications in fields like operations research, economics, collective action, political science, psychology, and biology. It has close links with economics in that it seeks to find rational strategies in situations where the outcome depends not only on one's own strategy and "market conditions", but upon the strategies chosen by other players with possibly different or overlapping goals. Applications in military strategy drove some of the early development of game theory.

Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. And computer scientists have used games to model interactive computations. Computability logic attempts to develop a comprehensive formal theory (logic) of interactive computational tasks and resources, formalizing these entities as games between a computing agent and its environment.

Game theoretic analysis can apply to simple games of entertainment or to more significant aspects of life and society. The prisoner's dilemma, as popularized by mathematician Albert W. Tucker, furnishes an example of the application of game theory to real life; it has many implications for the nature of human co-operation.

Biologists have used game theory to understand and predict certain outcomes of evolution, such as the concept of evolutionarily stable strategy introduced by John Maynard Smith and George R. Price in a 1973 paper in Nature (See also Maynard Smith 1982). See also evolutionary game theory and behavioral ecology.

Analysts of games commonly use other branches of mathematics, in particular probability, statistics and linear programming, in conjunction with game theory.

Types of games and examples

Game theory classifies games into many categories that determine which particular methods one can apply to solving them (and indeed how one defines "solved" for a particular category). Common categories include:

Zero-sum and non-zero-sum games

In zero-sum games the total benefit to all players in the game, for every combination of strategies, always adds to zero (or, more informally put, a player benefits only at the expense of others). Go, chess and poker exemplify zero-sum games, because one wins exactly the amount one's opponents lose. Most real-world examples in business and politics, as well as the famous prisoner's dilemma, are non-zero-sum games, because some outcomes have net results greater or less than zero. Informally, a gain by one player does not necessarily correspond with a loss by another. For example, a business contract ideally involves a positive-sum outcome, where each side ends up better off than if they did not make the deal.

Note that one can more easily analyse a zero-sum game; and it turns out that one can transform any game into a zero-sum game by adding an additional dummy player (often called "the board"), whose losses compensate the players' net winnings.

A payoff matrix provides a convenient way of representing a game. Consider for example the two-player zero-sum game with the following matrix:

                     Player 2
                     Action A    Action B    Action C
  Player 1
     Action 1            30         -10          20
     Action 2            10          20         -20

This game proceeds as follows: the first player chooses one of the two actions 1 or 2; and the second player, unaware of the first player's choice, chooses one of the three actions A, B or C. Once the players have made their choices, the payoff gets allocated according to the table; for instance, if the first player chose action 2 and the second player chose action B, then the first player gains 20 points and the second player loses 20 points. Both players know the payoff matrix and attempt to maximize the number of their points. What should they do?

Player 1 could reason as follows: "with action 2, I could lose up to 20 points and can win only 20, while with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, player 2 would choose action C (negative numbers in the table represent payoff for him). If both players take these actions, the first player will win 20 points. But what happens if player 2 anticipates the first player's reasoning and choice of action 1, and deviously goes for action B, so as to win 10 points? Or if the first player in turn anticipates this devious trick and goes for action 2, so as to win 20 points after all?

John von Neumann had the fundamental and surprising insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy; this leads to a linear programming problem with a unique solution for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that the first player should choose action 1 with probability 57% and action 2 with 43%, while the second player should assign the probabilities 0%, 57% and 43% to the three actions A, B and C. Player 1 will then win 20/7, or about 2.86 points, on average per game.
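The optimal probabilities above can be checked numerically. The following sketch uses a brute-force grid search over player 1's mixing probability rather than von Neumann's linear-programming method, but it recovers the same maximin strategy and game value:

```python
# The payoff matrix from the example above (payoffs to player 1).
payoffs = [
    [30, -10, 20],   # player 1, action 1
    [10, 20, -20],   # player 1, action 2
]

def min_expected_payoff(p):
    """Player 1 plays action 1 with probability p; player 2 responds
    with whichever pure action minimizes player 1's expected payoff."""
    return min(p * a1 + (1 - p) * a2 for a1, a2 in zip(*payoffs))

# Grid search for the probability maximizing the worst-case expectation.
best_p = max((i / 10000 for i in range(10001)), key=min_expected_payoff)
value = min_expected_payoff(best_p)

print(f"P(action 1) = {best_p:.4f}")   # close to 4/7 = 0.5714...
print(f"game value  = {value:.4f}")    # close to 20/7 = 2.8571...
```

A linear program gives the exact solution (4/7 and a value of 20/7); the grid search merely approximates it to the chosen resolution.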


Co-operative games

A co-operative game is characterized by an enforceable contract. The theory of co-operative games gives justifications for plausible contracts; the plausibility of a contract is closely related to its stability.

Axiomatic bargaining

Two players may bargain over how large a share each receives in a contract. The theory of axiomatic bargaining specifies what share is reasonable. For example, the Nash bargaining solution demands that the shares be fair and efficient (see an advanced textbook for the complete formal description).

However, a player unconcerned with fairness may simply demand more. How does the Nash bargaining solution deal with this problem? In fact, there is a non-cooperative game of alternating offers (due to Rubinstein) that supports the Nash bargaining solution as its unique Nash equilibrium.
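As a concrete illustration of the Nash bargaining solution (the surplus and disagreement payoffs below are assumed numbers, purely for illustration), the solution picks the division that maximizes the product of the players' gains over what they would get without an agreement:

```python
# Two players split a surplus of 10; without agreement, player 1
# falls back on a payoff of 2 and player 2 on a payoff of 0.
surplus, d1, d2 = 10.0, 2.0, 0.0

def nash_product(x):
    """x is player 1's share; divisions leaving either player below
    their disagreement payoff are ruled out."""
    u1, u2 = x, surplus - x
    return (u1 - d1) * (u2 - d2) if u1 >= d1 and u2 >= d2 else float("-inf")

# Search a fine grid of possible divisions for the maximizer.
best = max((i / 1000 * surplus for i in range(1001)), key=nash_product)
print(best)  # 6.0: player 1's fallback of 2 plus half the net surplus of 8
```

Note the characteristic pattern: each player receives their disagreement payoff plus an equal split of the remaining surplus, which is one sense in which the solution is both fair and efficient.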

Characteristic function games

Many players, instead of two, may cooperate to obtain a better outcome. Again, how the total output should be shared among the players is not obvious. The core gives a reasonable set of possible shares: a combination of shares is in the core if there exists no sub-coalition whose members could gain a higher total outcome on their own than the shares under consideration give them. If a combination of shares is not in the core, some members may become frustrated and consider leaving the whole group, together with some other members, to form a smaller group.
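The core condition can be checked mechanically. The following sketch uses an illustrative three-player characteristic function game (the coalition values are assumptions, not from the text) and tests whether a proposed division of the grand coalition's value lies in the core:

```python
from itertools import combinations

# Value each coalition can guarantee itself (an illustrative example).
v = {
    (): 0, (1,): 0, (2,): 0, (3,): 0,
    (1, 2): 60, (1, 3): 60, (2, 3): 60,
    (1, 2, 3): 100,
}

def in_core(shares):
    """shares maps player -> payoff. The division is in the core iff it
    distributes the grand coalition's full value and no coalition could
    guarantee its members more by acting alone."""
    if sum(shares.values()) != v[(1, 2, 3)]:
        return False
    return all(sum(shares[p] for p in coalition) >= v[coalition]
               for r in range(1, 4)
               for coalition in combinations((1, 2, 3), r))

print(in_core({1: 40, 2: 30, 3: 30}))  # True: no coalition can do better alone
print(in_core({1: 60, 2: 20, 3: 20}))  # False: coalition (2, 3) could get 60
```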

Games of complete information

In games of complete information each player has the same game-relevant information as every other player. Chess and the prisoner's dilemma exemplify complete-information games, while poker illustrates the opposite. Complete-information games occur only rarely in the real world, and game theorists usually use them only as approximations of the actual game played.

Risk aversion

For the above example to work, one must assume risk-neutral participants in the game. This means, for example, that they would value a bet with a 50% chance of receiving 20 points and a 50% chance of paying nothing as worth exactly 10 points. In reality, however, people often exhibit risk-averse behaviour and prefer a more certain outcome; they will only take a risk if they expect to gain on average. Subjective expected utility theory explains how to derive a measure of utility which always satisfies the criterion of risk neutrality, and which can therefore serve as the measure of payoff in game theory.

Game shows often provide examples of risk aversion. For example, offered either a 1 in 3 chance of winning $50,000 or a sure $10,000, many people will take the sure $10,000.
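The game-show choice can be expressed with a concave utility function. Square-root utility below is an illustrative assumption, not a claim about actual contestants; it merely shows how a risk-averse agent can prefer the sure payment even though the gamble has the higher expected monetary value:

```python
from math import sqrt

# A 1-in-3 chance of $50,000 versus a sure $10,000.
# Expected monetary value of the gamble: 50000/3, well above 10000.
eu_gamble = (1 / 3) * sqrt(50_000) + (2 / 3) * sqrt(0)
eu_sure = sqrt(10_000)

# Under sqrt utility the sure payment has the higher expected utility.
print(eu_gamble < eu_sure)  # True
```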

Lotteries can show the opposite behaviour of risk seeking: for example many people will risk $1 to buy a 1 in 14,000,000 chance of winning $7,000,000.

Games and numbers

John Conway developed a notation for certain complete information games and defined several operations on those games, originally in order to study Go endgames, though much of the analysis focused on Nim. This developed into combinatorial game theory.

In a surprising connection, he found that a certain subclass of these games can be used as numbers as described in his book On Numbers and Games, leading to the very general class of surreal numbers.

History

Though touched on by earlier mathematical results, modern game theory became a prominent branch of mathematics in the 1940s, especially after the 1944 publication of Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. This profound work contained the method, alluded to above, for finding optimal solutions for two-person zero-sum games.

Around 1950, John Nash developed a definition of an "optimum" strategy for multi-player games where no such optimum had previously been defined, now known as the Nash equilibrium. Reinhard Selten further refined this concept with his ideas of trembling-hand-perfect and subgame-perfect equilibria. These men won the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel in 1994 for their work on game theory, along with John Harsanyi, who developed the analysis of games of incomplete information.

Research into game theory continues, and there remain games which produce counter-intuitive optimal strategies even under advanced analytical techniques like trembling hand equilibrium. One example of this occurs in the Centipede Game, where at every decision players have the option of increasing their opponents' payoff at some cost to their own.

Some experimental tests of games indicate that in many situations people respond instinctively by picking a 'reasonable' solution or a 'social norm' rather than adopting the strategy indicated by a rational analytic concept.

The finding of Conway's number-game connection occurred in the early 1970s.

Specific Applications

See also Mathematical game; Artificial intelligence; Newcomb's paradox; game classification.
