
Learning to Bid in Bridge


Asaf Amit and Shaul Markovitch. Learning to Bid in Bridge. Machine Learning, 63(3):287–327, 2006.


Abstract

Bridge bidding is considered to be one of the most difficult problems for game-playing programs. It involves four agents rather than two, including a cooperative agent. In addition, the partial observability of the game makes it impossible to predict the outcome of each action. In this paper we present a new decision-making algorithm that is capable of overcoming these problems. The algorithm allows models to be used for both opponent agents and partners, while utilizing a novel model-based Monte Carlo sampling method to overcome the problem of hidden information. The paper also presents a learning framework that uses the above decision-making algorithm for co-training of partners. The agents refine their selection strategies during training and continuously exchange their refined strategies. The refinement is based on inductive learning applied to examples accumulated for classes of states with conflicting actions. The algorithm was empirically evaluated on a set of bridge deals. The pair of agents that co-trained significantly improved their bidding performance to a level surpassing that of the current state-of-the-art bidding algorithm.


Keywords: Opponent Modeling, Games, Learning in Games, Multi-Agent Systems, Bridge
Secondary Keywords: Adversary Search
Online version: http://www.cs.technion.ac.il/~shaulm/papers/pdf/Amit-Markovitch-mlj2005.pdf
Bibtex entry:
 @article{Amit:2006:LBB,
  Author = {Asaf Amit and Shaul Markovitch},
  Title = {Learning to Bid in Bridge},
  Year = {2006},
  Journal = {Machine Learning},
  Volume = {63},
  Number = {3},
  Pages = {287--327},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Amit-Markovitch-mlj2005.pdf},
  Keywords = {Opponent Modeling, Games, Learning in Games, Multi-Agent Systems, Bridge},
  Secondary-keywords = {Adversary Search},
  Abstract = {
    Bridge bidding is considered to be one of the most difficult
    problems for game-playing programs. It involves four agents rather
    than two, including a cooperative agent. In addition, the partial
    observability of the game makes it impossible to predict the
    outcome of each action. In this paper we present a new
    decision-making algorithm that is capable of overcoming these
    problems. The algorithm allows models to be used for both opponent
    agents and partners, while utilizing a novel model-based Monte
    Carlo sampling method to overcome the problem of hidden
    information. The paper also presents a learning framework that
    uses the above decision-making algorithm for co-training of
    partners. The agents refine their selection strategies during
    training and continuously exchange their refined strategies. The
    refinement is based on inductive learning applied to examples
    accumulated for classes of states with conflicting actions. The
    algorithm was empirically evaluated on a set of bridge deals. The
    pair of agents that co-trained significantly improved their
    bidding performance to a level surpassing that of the current
    state-of-the-art bidding algorithm.
  }
}