
Opponent Modeling in Multi-agent Systems


David Carmel and Shaul Markovitch. Opponent Modeling in Multi-agent Systems. In Gerhard Weiss and Sandip Sen, editors, Adaption and Learning in Multi-Agent Systems, volume 1042 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 1996.


Abstract

Agents that operate in a multi-agent system need an efficient strategy for handling their encounters with the other agents involved. Searching for an optimal interactive strategy is a hard problem because it depends mostly on the behavior of the other agents. In this work, interaction among agents is represented as a repeated two-player game, where the agents' objective is to find a strategy that maximizes their expected sum of rewards in the game. We assume that agents' strategies can be modeled as finite automata. A model-based approach is presented as a possible method for learning an effective interactive strategy. First, we describe how an agent should find an optimal strategy against a given model. Second, we present a heuristic algorithm that infers a model of the opponent's automaton from its input/output behavior. A set of experiments that shows the potential merit of the algorithm is reported as well.
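To make the automaton-based representation concrete, here is a minimal illustrative sketch (not code from the paper): a strategy in a repeated two-player game modeled as a Moore machine, where each state outputs an action and the opponent's last action selects the next state. The class and parameter names are my own; Tit-for-Tat in the iterated prisoner's dilemma is the standard two-state example of such an automaton.

```python
class StrategyAutomaton:
    """A strategy as a Moore machine: each state outputs an action
    ('C' or 'D'); the opponent's action drives the state transition."""

    def __init__(self, start, output, delta):
        self.start = start    # initial state
        self.output = output  # dict: state -> action
        self.delta = delta    # dict: (state, opponent_action) -> next state

    def play(self, opponent_moves):
        """Return this automaton's moves against a fixed opponent sequence."""
        state, moves = self.start, []
        for opp in opponent_moves:
            moves.append(self.output[state])
            state = self.delta[(state, opp)]
        return moves


# Tit-for-Tat: cooperate first, then repeat the opponent's previous move.
tit_for_tat = StrategyAutomaton(
    start='C',
    output={'C': 'C', 'D': 'D'},
    delta={('C', 'C'): 'C', ('C', 'D'): 'D',
           ('D', 'C'): 'C', ('D', 'D'): 'D'},
)

print(tit_for_tat.play(['C', 'D', 'D', 'C']))  # ['C', 'C', 'D', 'D']
```

Because such a model is a deterministic finite automaton, an agent that holds it can compute a best response by search over the model's states, and the model itself can be inferred from observed input/output behavior, as the paper describes.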


Keywords: Opponent Modeling, Multi-Agent Systems, Learning in Games, Games, Repeated Games, Learning DFA
Online version: http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-lnai1996.pdf
Bibtex entry:
 @incollection{Carmel:1996:OMM,
  Author = {David Carmel and Shaul Markovitch},
  Title = {Opponent Modeling in Multi-agent Systems},
  Year = {1996},
  Booktitle = {Adaption And Learning In Multi-Agent Systems},
  Volume = {1042},
  Editor = {Gerhard Weiss and Sandip Sen},
  Publisher = {Springer-Verlag},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-lnai1996.pdf},
  Keywords = {Opponent Modeling, Multi-Agent Systems, Learning in Games, Games, Repeated Games, Learning DFA},
  Abstract = {
    Agents that operate in a multi-agent system need an efficient
    strategy to handle their encounters with other agents involved.
    Searching for an optimal interactive strategy is a hard problem
    because it depends mostly on the behavior of the others. In this
    work, interaction among agents is represented as a repeated two-
    player game, where the agents' objective is to look for a strategy
    that maximizes their expected sum of rewards in the game. We
    assume that agents' strategies can be modeled as finite automata.
    A model-based approach is presented as a possible method for
    learning an effective interactive strategy. First, we describe how
    an agent should find an optimal strategy against a given model.
    Second, we present a heuristic algorithm that infers a model of
    the opponent's automaton from its input/output behavior. A set of
    experiments that show the potential merit of the algorithm is
    reported as well.
  }
}