
Model-based Learning of Interaction Strategies in Multi-Agent Systems


David Carmel and Shaul Markovitch. Model-based Learning of Interaction Strategies in Multi-Agent Systems. Journal of Experimental and Theoretical Artificial Intelligence, 10(3):309-332, 1998.


Abstract

Agents that operate in a multi-agent system need an efficient strategy to handle their encounters with other agents involved. Searching for an optimal interaction strategy is a hard problem because it depends mostly on the behavior of the others. One way to deal with this problem is to endow the agents with the ability to adapt their strategies based on their interaction experience. This work views interaction as a repeated game and presents a general architecture for a model-based agent that learns models of the rival agents for exploitation in future encounters. First, we describe a method for inferring an optimal strategy against a given model of another agent. Second, we present an unsupervised algorithm that infers a model of the opponent's strategy from its interaction behavior in the past. We then present a method for incorporating exploration strategies into model-based learning. We report experimental results demonstrating the superiority of the model-based learning agent over non-adaptive agents and over reinforcement-learning agents.
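As a concrete illustration of the first step above (inferring an optimal strategy against a given model), here is a minimal sketch in Python. It is not the paper's implementation: it assumes the opponent model is a deterministic finite automaton whose states are labeled with moves (the representation suggested by the paper's DFA-learning setting) and computes a finite-horizon best response for the iterated prisoner's dilemma by dynamic programming. The names Dfa, best_response, and PAYOFF are illustrative.

  from functools import lru_cache

  # Row player's payoffs in the prisoner's dilemma, keyed by (my_move, opp_move).
  PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
  MOVES = ('C', 'D')

  class Dfa:
      """Hypothetical opponent model: a Moore machine over the learner's moves.
      output[q] is the move the opponent plays in state q;
      delta[q][m] is its next state after observing the learner's move m."""
      def __init__(self, output, delta, start=0):
          self.output, self.delta, self.start = output, delta, start

  def best_response(model, horizon):
      """Value of an optimal strategy against `model` over `horizon` rounds,
      computed by dynamic programming over (model state, rounds left)."""
      @lru_cache(maxsize=None)
      def value(q, t):
          if t == 0:
              return 0
          return max(PAYOFF[(m, model.output[q])] + value(model.delta[q][m], t - 1)
                     for m in MOVES)
      return value(model.start, horizon)

  # Example model: Tit-for-Tat as a two-state DFA (state 0 plays C, state 1
  # plays D; both states move to the state mirroring the learner's last move).
  tit_for_tat = Dfa(output={0: 'C', 1: 'D'},
                    delta={0: {'C': 0, 'D': 1}, 1: {'C': 0, 'D': 1}})
  print(best_response(tit_for_tat, 10))  # 32: cooperate for nine rounds, defect on the last

Against the Tit-for-Tat model, the sketch exploits the known model in the way the abstract describes: the optimal ten-round response cooperates for nine rounds and defects only on the last, scoring 32 rather than the 30 earned by always cooperating.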


Keywords: Opponent Modeling, Games, Multi-Agent Systems, Learning in Games
Secondary Keywords: Repeated Games, Learning DFA
Online version: http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-jetai1998.pdf
Bibtex entry:
@article{Carmel:1998:MBL,
  Author = {David Carmel and Shaul Markovitch},
  Title = {Model-based Learning of Interaction Strategies in Multi-Agent Systems},
  Year = {1998},
  Journal = {Journal of Experimental and Theoretical Artificial Intelligence},
  Volume = {10},
  Number = {3},
  Pages = {309--332},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-jetai1998.pdf},
  Keywords = {Opponent Modeling, Games, Multi-Agent Systems, Learning in Games},
  Secondary-keywords = {Repeated Games, Learning DFA},
  Abstract = {
    Agents that operate in a multi-agent system need an efficient
    strategy to handle their encounters with other agents involved.
    Searching for an optimal interaction strategy is a hard problem
    because it depends mostly on the behavior of the others. One way
    to deal with this problem is to endow the agents with the ability
    to adapt their strategies based on their interaction experience.
    This work views interaction as a repeated game and presents a
    general architecture for a model-based agent that learns models of
    the rival agents for exploitation in future encounters. First, we
    describe a method for inferring an optimal strategy against a
    given model of another agent. Second, we present an unsupervised
    algorithm that infers a model of the opponent's strategy from its
    interaction behavior in the past. We then present a method for
    incorporating exploration strategies into model-based learning. We
    report experimental results demonstrating the superiority of the
    model-based learning agent over non-adaptive agents and over
    reinforcement-learning agents.
  }
}