Eddie Dekel, Drew Fudenberg and David K. Levine
June 20, 2001
Abstract: This paper discusses the implications of learning theory for the analysis of Bayesian games. One goal is to illuminate the issues that arise when modeling situations where players are learning about the distribution of Nature's move as well as about their opponents' strategies. A second goal is to argue that quite restrictive assumptions are necessary to justify the concept of Nash equilibrium without a common prior as the steady state of a learning process.