
Exploration and Adaptation in Multiagent Systems: A Model-Based Approach


David Carmel and Shaul Markovitch. Exploration and Adaptation in Multiagent Systems: A Model-Based Approach. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, pages 606-611, Nagoya, Japan, 1997.


Abstract

Agents that operate in a multi-agent system can benefit significantly from adapting to other agents while interacting with them. This work presents a general architecture for a model-based learning strategy combined with an exploration strategy. This combination enables adaptive agents to learn models of their rivals and to explore their behavior for exploitation in future encounters. We report experimental results in the Iterated Prisoner's Dilemma domain, demonstrating the superiority of the model-based learning agent over non-adaptive agents and over reinforcement-learning agents. The experimental results also show that exploration can improve the performance of a model-based agent significantly.
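Illustration (not from the paper): the sketch below is a minimal, hypothetical Python rendering of the general idea described in the abstract, a model-based agent in the Iterated Prisoner's Dilemma that estimates how its opponent reacts to each of its moves and best-responds to that model, with epsilon-exploration to keep the model informed. The paper itself learns DFA opponent models and uses lookahead; all names and parameters here are illustrative assumptions.

    import random

    # Standard IPD payoffs for (my_move, opp_move); C = cooperate, D = defect.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    class ModelBasedAgent:
        """Estimates P(opponent cooperates next | my current move) and plays
        the move with the best steady-state payoff under that model
        (hypothetical sketch; the paper learns DFA opponent models instead)."""

        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            # Laplace-smoothed reaction counts: counts[my_move][opp_reaction]
            self.counts = {'C': {'C': 1, 'D': 1}, 'D': {'C': 1, 'D': 1}}
            self.prev_move = 'C'

        def value(self, m):
            # Expected payoff of committing to move m, given the learned
            # model of how the opponent reacts to m.
            c = self.counts[m]
            p = c['C'] / (c['C'] + c['D'])
            return p * PAYOFF[(m, 'C')] + (1 - p) * PAYOFF[(m, 'D')]

        def choose(self):
            if random.random() < self.epsilon:       # explore: probe the opponent
                return random.choice(['C', 'D'])
            return max(['C', 'D'], key=self.value)   # exploit the learned model

        def observe(self, my_move, opp_move):
            # The opponent's move this round is treated as a reaction to my
            # previous move (true for reactive strategies like Tit-for-Tat).
            self.counts[self.prev_move][opp_move] += 1
            self.prev_move = my_move

    def tit_for_tat(history):
        return 'C' if not history else history[-1][0]  # copy my last move

    agent, history, total = ModelBasedAgent(), [], 0
    for _ in range(300):
        my, opp = agent.choose(), tit_for_tat(history)
        agent.observe(my, opp)
        history.append((my, opp))
        total += PAYOFF[(my, opp)]
    print('model-based agent total payoff over 300 rounds:', total)

Against a Tit-for-Tat opponent, exploration lets this agent discover that sustained cooperation (payoff 3 per round) beats sustained defection (payoff 1 per round), mirroring the adaptive advantage the abstract describes.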


Keywords: Opponent Modeling, Exploration, Active Learning, Learning in Games, Multi-Agent Systems, Repeated Games, Games
Secondary Keywords: Lookahead, Exploration vs. Exploitation, Learning DFA
Online version: http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-ijcai97.pdf
Bibtex entry:
 @inproceedings{Carmel:1997:EAM,
  Author = {David Carmel and Shaul Markovitch},
  Title = {Exploration and Adaptation in Multiagent Systems: A Model-Based Approach},
  Year = {1997},
  Booktitle = {Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence},
  Pages = {606--611},
  Address = {Nagoya, Japan},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Carmel-Markovitch-ijcai97.pdf},
  Keywords = {Opponent Modeling, Exploration, Active Learning, Learning in Games, Multi-Agent Systems, Repeated Games, Games},
  Secondary-keywords = {Lookahead, Exploration vs. Exploitation, Learning DFA},
  Abstract = {
    Agents that operate in a multi-agent system can benefit
    significantly from adapting to other agents while interacting with
    them. This work presents a general architecture for a model-based
    learning strategy combined with an exploration strategy. This
    combination enables adaptive agents to learn models of their
    rivals and to explore their behavior for exploitation in future
    encounters. We report experimental results in the {\em Iterated
    Prisoner's Dilemma} domain, demonstrating the superiority of the
    model-based learning agent over non-adaptive agents and over
    reinforcement-learning agents. The experimental results also show
    that exploration can improve the performance of a model-based
    agent significantly.
  }
 }