
Learning Models of Intelligent Agents


David Carmel and Shaul Markovitch. Learning Models of Intelligent Agents. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 62-67, Portland, Oregon, 1996.


Abstract

Agents that operate in a multi-agent system need an efficient strategy to handle their encounters with the other agents involved. Searching for an optimal interactive strategy is a hard problem because it depends mostly on the behavior of the other agents. In this work, interaction among agents is represented as a repeated two-player game, where each agent's objective is to find a strategy that maximizes its expected sum of rewards in the game. We assume that agents' strategies can be modeled as finite automata. A model-based approach is presented as a possible method for learning an effective interactive strategy. First, we describe how an agent should find an optimal strategy against a given model. Second, we present an unsupervised algorithm that infers a model of the opponent's automaton from its input/output behavior. We also report a set of experiments that show the potential merit of the algorithm.
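
To make the first of the two steps above concrete, here is a minimal sketch in Python (not the paper's code). It assumes the repeated game is the iterated Prisoner's Dilemma with conventional payoffs, models the opponent as a Moore machine (Tit-for-Tat is shown as a two-state example), and computes a best response by value iteration, using the standard fact that a discounted repeated game against a known finite automaton reduces to a Markov decision process over the automaton's states. The names MooreMachine and best_response and the payoff values are illustrative assumptions; the paper's unsupervised model-inference algorithm is not reproduced here.

  # Sketch only: best response against a known opponent automaton.
  ACTIONS = ("C", "D")  # cooperate / defect

  # Payoff to the modeling agent, indexed by (my action, opponent action).
  PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

  class MooreMachine:
      """Opponent model: each state emits an action; transitions read our action."""
      def __init__(self, outputs, transitions, start=0):
          self.outputs = outputs          # state -> opponent's action
          self.transitions = transitions  # (state, our action) -> next state
          self.start = start

  # Tit-for-Tat: state 0 plays C, state 1 plays D; next state mirrors our move.
  TIT_FOR_TAT = MooreMachine(
      outputs={0: "C", 1: "D"},
      transitions={(0, "C"): 0, (0, "D"): 1, (1, "C"): 0, (1, "D"): 1},
  )

  def best_response(model, gamma=0.95, iters=1000):
      """Value iteration over the model's states: a discounted repeated game
      against a known automaton is an MDP whose states are automaton states."""
      states = list(model.outputs)
      V = {s: 0.0 for s in states}
      for _ in range(iters):
          V = {s: max(PAYOFF[(a, model.outputs[s])]
                      + gamma * V[model.transitions[(s, a)]]
                      for a in ACTIONS)
               for s in states}
      policy = {s: max(ACTIONS,
                       key=lambda a: PAYOFF[(a, model.outputs[s])]
                                     + gamma * V[model.transitions[(s, a)]])
                for s in states}
      return policy, V

  policy, V = best_response(TIT_FOR_TAT)
  print(policy)  # with gamma=0.95, always cooperating is the best response

Against Tit-for-Tat with this discount factor, the computed policy cooperates in both states: the one-shot gain from defection (5 versus 3) is outweighed by the discounted cost of the opponent's retaliation.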


Keywords: Opponent Modeling, Repeated Games, Multi-Agent Systems, Learning DFA, Games
Online version: http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-OMM-aaai1996.pdf
Bibtex entry:
 @inproceedings{Carmel:1996:LMIa,
  Author = {David Carmel and Shaul Markovitch},
  Title = {Learning Models of Intelligent Agents},
  Year = {1996},
  Booktitle = {Proceedings of the Thirteenth National Conference on Artificial Intelligence},
  Pages = {62--67},
  Address = {Portland, Oregon},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-OMM-aaai1996.pdf},
  Keywords = {Opponent Modeling, Repeated Games, Multi-Agent Systems, Learning DFA, Games},
  Abstract = {
    Agents that operate in a multi-agent system need an efficient
    strategy to handle their encounters with the other agents involved.
    Searching for an optimal interactive strategy is a hard problem
    because it depends mostly on the behavior of the other agents. In
    this work, interaction among agents is represented as a repeated
    two-player game, where each agent's objective is to find a strategy
    that maximizes its expected sum of rewards in the game. We assume
    that agents' strategies can be modeled as finite automata. A
    model-based approach is presented as a possible method for learning
    an effective interactive strategy. First, we describe how an agent
    should find an optimal strategy against a given model. Second, we
    present an unsupervised algorithm that infers a model of the
    opponent's automaton from its input/output behavior. We also report
    a set of experiments that show the potential merit of the algorithm.
  }
 }