11 October 2016
San Francesco - Via della Quarquonia 1 (Classroom 2)
Abstract: In this paper we study the long-run convention emerging from stag-hunt interactions when errors converge to zero at a rate that is positively related to the payoff earned in the previous period. We refer to such errors as condition-dependent mistakes. We find that, if interactions are sufficiently stable over time, then the payoff-dominant convention emerges in the long run. If interactions are neither too stable nor too volatile, then the risk-dominant convention is selected in the long run. Finally, if interactions are highly volatile, then the maximin convention emerges even if it is not risk-dominant. We introduce the notion of condition-adjusted risk dominance to characterize the convention emerging in the long run under condition-dependent mistakes. We contrast these results with those obtained under alternative error models: uniform mistakes, i.e., errors converge to zero at a rate that is constant over states, and payoff-dependent mistakes, i.e., errors converge to zero at a rate that depends on expected losses.
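The three conventions named in the abstract can be illustrated on a concrete 2x2 stag hunt. The payoffs below are hypothetical (not taken from the paper); the risk-dominance test is the standard Harsanyi-Selten criterion for symmetric 2x2 coordination games.

```python
# Illustrative sketch with hypothetical payoffs for a symmetric stag hunt.
# Row player's payoffs: a = (Stag, Stag), b = (Stag, Hare),
# c = (Hare, Stag), d = (Hare, Hare), with a > c >= d > b.
a, b, c, d = 10, 0, 3, 2

# Payoff dominance: (Stag, Stag) Pareto-dominates (Hare, Hare) when a > d.
payoff_dominant = "Stag" if a > d else "Hare"

# Risk dominance (Harsanyi-Selten): Stag risk-dominates Hare iff
# (a - c) > (d - b), i.e. Stag has the larger basin of attraction.
risk_dominant = "Stag" if (a - c) > (d - b) else "Hare"

# Maximin: the action with the higher worst-case payoff. In a stag hunt
# d > b always holds, so the maximin action is Hare.
maximin = "Stag" if min(a, b) > min(c, d) else "Hare"

print(payoff_dominant, risk_dominant, maximin)  # -> Stag Stag Hare
```

With these payoffs the maximin convention (Hare) differs from the risk-dominant one (Stag), which is exactly the configuration under which, per the abstract, highly volatile interactions can select a maximin convention that is not risk-dominant.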