Category: Game Theory

Jim Ratliff’s graduate-level course in game theory

Here are 14 chapters of lecture notes from a one-semester game-theory course I taught to students in their second year of the economics PhD program at the University of Arizona during the 1992-1997 period. The material would also be helpful to first-year PhD students learning game theory as part of their microeconomic-theory sequence, as well as to advanced undergraduates learning game theory. I consider the exposition detailed, rigorous, and self-contained.

I no longer teach game theory, so these notes are currently frozen in this state. I’m making them available here because I still get requests for them. I have not updated them to reflect subsequent advances.

These notes are in PDF format. You can download the entire course as a single compressed folder (holding 14 separate PDFs) or you can follow each chapter’s link below in the Course Table of Contents to read the abstract of, and/or download, that chapter.

§ 1 Strategic-form games

Chapter 1 of Jim Ratliff’s graduate-level game-theory course. The strategic form (or “normal form”) of a game is defined by a set of players, the actions available to each player, and each player’s payoffs to combinations of actions. We discuss best responses to a pure-strategy profile, mixed strategies, expected payoffs to a mixed-strategy profile, the best-response correspondence, and best-response mixed strategies.
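
To make these objects concrete, here is a minimal Python sketch (not taken from the notes; the payoff matrix is a hypothetical 2×2 coordination game) that computes the expected payoff to a mixed-strategy profile and the row player’s pure best responses:

```python
# A minimal sketch, not from the notes: expected payoffs and pure-strategy
# best responses in a two-player strategic-form game.

# Row player's payoffs in a hypothetical 2x2 coordination game:
# entry [i][j] is her payoff when she plays row i and the opponent column j.
U_ROW = [[2, 0],
         [0, 1]]

def expected_payoff(u, p_row, p_col):
    """Expected payoff to the mixed-strategy profile (p_row, p_col)."""
    return sum(p_row[i] * p_col[j] * u[i][j]
               for i in range(len(u)) for j in range(len(u[0])))

def pure_best_responses(u, p_col):
    """The row player's pure strategies that are best responses to p_col."""
    values = [sum(p_col[j] * u[i][j] for j in range(len(u[0])))
              for i in range(len(u))]
    best = max(values)
    return [i for i, v in enumerate(values) if abs(v - best) < 1e-12]

# Against a 50/50 opponent, row 0 yields 1.0 and row 1 yields 0.5,
# so row 0 is the unique pure best response.
print(expected_payoff(U_ROW, [0.5, 0.5], [0.5, 0.5]))  # 0.75
print(pure_best_responses(U_ROW, [0.5, 0.5]))          # [0]
```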

§ 2.1 Strategic dominance

Chapter 2.1 of Jim Ratliff’s graduate-level game-theory course. Explores nonequilibrium solution concepts based on the concept of dominance and its close relative: never a best response. Because a rational player would never play a dominated strategy, a dominance analysis can sometimes rule out some outcomes when the game is played by rational players. Sometimes such a dominance analysis even leads to a unique prediction. In two-player games, dominance and “never a best response” are equivalent. With more than two players, the never-a-best-response criterion is stronger: it can rule out strategies that no dominance argument can.
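
A minimal sketch of a pure-strategy dominance check, using hypothetical Prisoner’s Dilemma payoffs; dominance by a mixed strategy, which the notes also treat, would require solving a linear program and is omitted here:

```python
# A hypothetical sketch: checking whether one pure strategy strictly
# dominates another (pure-strategy dominance only).

def strictly_dominates(u, s, t):
    """True if row s yields strictly more than row t against every column."""
    return all(u[s][j] > u[t][j] for j in range(len(u[0])))

# Row player's payoffs in a Prisoner's Dilemma: Defect (row 1) strictly
# dominates Cooperate (row 0), so a rational player never cooperates.
U_ROW = [[3, 0],   # Cooperate vs. opponent's (Cooperate, Defect)
         [5, 1]]   # Defect    vs. opponent's (Cooperate, Defect)
print(strictly_dominates(U_ROW, 1, 0))  # True
```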

§ 2.2 Iterated dominance & rationalizability

Chapter 2.2 of Jim Ratliff’s graduate-level game-theory course. Now we strengthen the assumption: not only is each player rational, but each knows the others are rational, each knows that the others know she is rational, and so on. The infinite hierarchy of such assumptions constitutes “common knowledge” of rationality. Common knowledge justifies an iterative process of outcome rejection based on dominance or, alternatively, never a best response—which leads to the solution concept of rationalizability. Outcomes that don’t survive this process cannot plausibly be played when the players’ rationality is common knowledge. Even when rationality is common knowledge, some players may have erroneous beliefs; hence some players may have ex post regret about their choice of strategy.
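
The iterative process can be sketched in a few lines of Python. This hypothetical version deletes only strategies strictly dominated by another surviving pure strategy, a simpler criterion than the notes’ mixed-strategy dominance and never-a-best-response treatments:

```python
# A simplified sketch of iterated elimination of strictly dominated
# strategies for a two-player game (hypothetical payoffs).

def iterated_elimination(u_row, u_col):
    """Surviving row and column indices after iteratively deleting
    strategies strictly dominated by another surviving pure strategy."""
    rows = list(range(len(u_row)))
    cols = list(range(len(u_row[0])))
    changed = True
    while changed:
        changed = False
        for s in rows[:]:   # rows dominated by another surviving row
            if any(all(u_row[t][j] > u_row[s][j] for j in cols)
                   for t in rows if t != s):
                rows.remove(s)
                changed = True
        for s in cols[:]:   # columns dominated by another surviving column
            if any(all(u_col[i][t] > u_col[i][s] for i in rows)
                   for t in cols if t != s):
                cols.remove(s)
                changed = True
    return rows, cols

# A 2x3 game solvable by iterated dominance: first column R is eliminated,
# then row B, then column L, leaving the unique prediction (T, M).
U_ROW = [[1, 1, 0],
         [0, 0, 2]]
U_COL = [[0, 2, 1],
         [3, 1, 0]]
print(iterated_elimination(U_ROW, U_COL))  # ([0], [1])
```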

§ 3.1 Nash equilibrium

Chapter 3.1 of Jim Ratliff’s graduate-level game-theory course. When each player correctly forecasts the strategies of her opponents and then plays a best response to that forecast, the resulting strategy profile is a Nash equilibrium. Although a game need not have a pure-strategy equilibrium, John Nash proved—using Kakutani’s fixed-point theorem—that every finite game has a (possibly degenerate) mixed-strategy equilibrium. Attempts to justify Nash equilibrium as either a self-enforcing agreement or the outcome of a dynamic process are problematic and deficient. Further, a Nash equilibrium can be vulnerable to joint defections by groups of players.
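
Because any profitable mixed deviation implies a profitable pure deviation, a candidate equilibrium can be verified by checking pure-strategy deviations only. A hypothetical sketch, using Matching Pennies payoffs:

```python
# A hypothetical sketch: verifying a candidate mixed-strategy Nash
# equilibrium by checking all pure-strategy deviations.

def expected(u, p, q):
    """Expected payoff to the mixed-strategy profile (p, q)."""
    return sum(p[i] * q[j] * u[i][j]
               for i in range(len(u)) for j in range(len(u[0])))

def is_nash(u_row, u_col, p, q, tol=1e-9):
    """True if no player has a profitable pure-strategy deviation."""
    v_row, v_col = expected(u_row, p, q), expected(u_col, p, q)
    for i in range(len(u_row)):        # row player's pure deviations
        e_i = [1.0 if k == i else 0.0 for k in range(len(p))]
        if expected(u_row, e_i, q) > v_row + tol:
            return False
    for j in range(len(u_col[0])):     # column player's pure deviations
        e_j = [1.0 if k == j else 0.0 for k in range(len(q))]
        if expected(u_col, p, e_j) > v_col + tol:
            return False
    return True

U_ROW = [[1, -1], [-1, 1]]   # Matching Pennies: row wins on a match
U_COL = [[-1, 1], [1, -1]]
print(is_nash(U_ROW, U_COL, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_nash(U_ROW, U_COL, [1.0, 0.0], [0.5, 0.5]))  # False
```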

§ 3.2 Computing mixed-strategy Nash equilibria of 2×2 strategic-form games

Chapter 3.2 of Jim Ratliff’s graduate-level game-theory course. We learn how to compute the set of mixed-strategy Nash equilibria for 2×2 strategic-form (“matrix”) games. For each player, we assign a mixing probability (e.g., p) to one of her strategies (and assign 1−p to the other). We determine each player’s best-response correspondence, which specifies her optimal pure strategy (or mixture over both strategies) as a function of the opponent’s mixing probability. The Nash equilibria of the game are the points in the intersection of the players’ best-response correspondences.
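
The interior (fully mixed) equilibrium can be computed directly from the two indifference conditions. A minimal sketch, with hypothetical payoffs, that ignores the pure and degenerate cases the full best-response-correspondence analysis handles:

```python
# A minimal sketch of the indifference method for an interior mixed
# equilibrium of a 2x2 game: each player's probability is chosen to make
# the OTHER player indifferent between her two strategies.

def interior_mixed_equilibrium(u_row, u_col):
    # q = Pr(column 0) makes the row player indifferent between her rows:
    #   q*u_row[0][0] + (1-q)*u_row[0][1] = q*u_row[1][0] + (1-q)*u_row[1][1]
    q = (u_row[1][1] - u_row[0][1]) / (
        u_row[0][0] - u_row[1][0] + u_row[1][1] - u_row[0][1])
    # p = Pr(row 0) makes the column player indifferent between her columns:
    p = (u_col[1][1] - u_col[1][0]) / (
        u_col[0][0] - u_col[0][1] + u_col[1][1] - u_col[1][0])
    return p, q

# Matching Pennies has the unique equilibrium p = q = 1/2.
U_ROW = [[1, -1], [-1, 1]]
U_COL = [[-1, 1], [1, -1]]
print(interior_mixed_equilibrium(U_ROW, U_COL))  # (0.5, 0.5)
```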

§ 4.1 Introduction to extensive-form games

Chapter 4.1 of Jim Ratliff’s graduate-level game-theory course. The extensive form of a game can capture complex temporal and informational structure that the strategic form cannot. The extensive form is built upon a tree of nodes; each decision node belongs to a specific player, who has a defined set of actions available there. An information set is a set of a given player’s nodes among which she cannot distinguish when she has reached one of them. Collectively, the information sets define what each player knows at each point in the game. We discuss the assumption of perfect recall. We define the crucial concept of a subgame.
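
One hypothetical way to represent such a tree in code (the field names are illustrative, not the notes’ notation) is a node that records its owner, its information-set label, and its children:

```python
# A hypothetical data-structure sketch of an extensive-form game tree.

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Node:
    player: Optional[str] = None    # who moves here (None at a terminal node)
    info_set: Optional[str] = None  # label shared by indistinguishable nodes
    actions: Dict[str, "Node"] = field(default_factory=dict)
    payoffs: Optional[Tuple[float, ...]] = None  # set only at terminal nodes

# A simultaneous-move game encoded in extensive form: both of player 2's
# nodes share the information set "2.a", so she cannot condition her action
# on player 1's choice.
game = Node(player="1", info_set="1.a", actions={
    "L": Node(player="2", info_set="2.a", actions={
        "l": Node(payoffs=(2, 2)), "r": Node(payoffs=(0, 0))}),
    "R": Node(player="2", info_set="2.a", actions={
        "l": Node(payoffs=(0, 0)), "r": Node(payoffs=(1, 1))}),
})
```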

§ 4.2 Strategies in extensive-form games

Chapter 4.2 of Jim Ratliff’s graduate-level game-theory course. We define a strategy for a player in an extensive-form game as a specification for each of her information sets of the (pure or mixed) action she would take at that information set. One such strategy for each player constitutes a strategy profile for the extensive-form game. Every extensive-form game can be expressed as a strategic-form game. We define how to restrict an extensive-game strategy to a particular subgame. We incorporate uncertain exogenous events into the extensive form by introducing Nature as a nonstrategic player who acts randomly. We distinguish between two different types of randomized strategies in extensive-form games: behavioral strategies and mixed strategies.
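
Since a pure strategy is a map from a player’s information sets to actions, her pure strategies can be enumerated by taking the Cartesian product of her action sets. A hypothetical sketch (labels and actions illustrative):

```python
# A hypothetical sketch: enumerating a player's pure strategies as maps
# from information-set labels to actions.

from itertools import product

def pure_strategies(info_sets):
    """info_sets maps each information-set label to its available actions;
    returns one dict (information set -> action) per pure strategy."""
    labels = sorted(info_sets)
    return [dict(zip(labels, combo))
            for combo in product(*(info_sets[h] for h in labels))]

# Two information sets with two actions each yield 2 * 2 = 4 pure
# strategies, even if some prescribe actions at unreachable information sets.
print(pure_strategies({"1.a": ["L", "R"], "1.b": ["l", "r"]}))
```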

§ 4.3 Solution concepts in extensive-form games

Chapter 4.3 of Jim Ratliff’s graduate-level game-theory course. A rational player should also be sequentially rational: her planned action at any point in the game must actually be optimal, given her beliefs, when that point is reached. We decompose an extensive-form game into a subgame and its complement, viz., its difference game. Then we learn how to restrict extensive-form game strategies to the subgame. We define the solution concept of subgame-perfect equilibrium as a refinement of Nash equilibrium that imposes the desired dynamic consistency. We use Zermelo’s backward-induction algorithm to prove that every finite extensive-form game of perfect information, i.e., where every information set contains exactly one decision node, has a pure-strategy subgame-perfect equilibrium. This algorithm also provides a useful technique for finding the equilibria of actual games.
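
A minimal sketch of backward induction on a perfect-information tree, in the spirit of Zermelo’s algorithm (the tuple-based tree encoding is hypothetical): each terminal node is a payoff tuple, and each decision node records whose turn it is and the available actions:

```python
# A minimal backward-induction sketch for a finite perfect-information game.
# A terminal node is a payoff tuple; a decision node is
# (player_index, {action_label: subtree}).

def backward_induction(node, path=()):
    """Returns (payoffs, plan), where plan maps each decision node's path
    from the root to the optimal action chosen there."""
    if not (isinstance(node, tuple) and isinstance(node[1], dict)):
        return node, {}                      # terminal node: a payoff tuple
    player, actions = node
    plan, best_action, best_value = {}, None, None
    for a, child in actions.items():
        value, sub_plan = backward_induction(child, path + (a,))
        plan.update(sub_plan)
        if best_value is None or value[player] > best_value[player]:
            best_action, best_value = a, value
    plan[path] = best_action
    return best_value, plan

# A hypothetical entry game: player 0 stays out or enters; if she enters,
# player 1 accommodates or fights.
game = (0, {"Out": (1, 3),
            "In":  (1, {"Accommodate": (2, 2), "Fight": (0, 0)})})
value, plan = backward_induction(game)
print(value)  # (2, 2)
print(plan)   # {('In',): 'Accommodate', (): 'In'}
```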

§ 5.1 Introduction to repeated games

Chapter 5.1 of Jim Ratliff’s graduate-level game-theory course. A repeated game is the repetition of a strategic-form “stage game,” either finitely or infinitely many times, where each player’s payoff to the repeated game is the sum (perhaps with discounting) of her stage-game payoffs. We define the concepts of Nash equilibrium and subgame-perfect equilibrium for repeated games. We show that any sequence of stage-game Nash equilibria is a subgame-perfect equilibrium of the repeated game. For finitely repeated games, we exploit the existence of a final period to establish a necessary condition for a repeated-game strategy profile to be a Nash equilibrium: the last period’s play must be a stage-game Nash equilibrium on the equilibrium path; subgame perfection requires that the last period’s play be a stage-game Nash equilibrium even off the equilibrium path. When the stage game has a unique Nash equilibrium, the unique subgame-perfect equilibrium of the finitely repeated game is the period-by-period repetition of that stage-game equilibrium.
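
Repeated-game payoff accounting is a one-liner. A small sketch with hypothetical stage payoffs:

```python
# A minimal sketch: a repeated-game payoff as the discounted sum of
# (hypothetical) stage-game payoffs.

def repeated_payoff(stage_payoffs, delta):
    """Sum over t of delta**t * u_t."""
    return sum(delta**t * u for t, u in enumerate(stage_payoffs))

# The Prisoner's Dilemma's unique stage-equilibrium payoff of 1, received
# in each of three periods, discounted at delta = 0.9:
print(repeated_payoff([1, 1, 1], 0.9))  # 1 + 0.9 + 0.81 = 2.71 (up to rounding)
```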