
Learning and Exploiting Relative Weaknesses of Opponent Agents


Shaul Markovitch and Ronit Reger. Learning and Exploiting Relative Weaknesses of Opponent Agents. Autonomous Agents and Multi-Agent Systems, 10(2):103-130, March 2005.


Abstract

Agents in a competitive interaction can greatly benefit from adapting to a particular adversary, rather than using the same general strategy against all opponents. One method of such adaptation is Opponent Modeling, in which a model of an opponent is acquired and utilized as part of the agent's decision procedure in future interactions with this opponent. However, acquiring an accurate model of a complex opponent strategy may be computationally infeasible. In addition, if the learned model is not accurate, then using it to predict the opponent's actions may harm the agent's strategy rather than improve it. We thus define the concept of opponent weakness, and present a method for learning a model of this simpler concept. We analyze examples of past behavior of an opponent in a particular domain, judging its actions using a trusted judge. We then infer a weakness model based on the opponent's actions relative to the domain state, and incorporate this model into our agent's decision procedure. We also make use of a similar self-weakness model, allowing the agent to prefer states in which the opponent is weak and our agent strong, that is, states where we have a relative advantage over the opponent. Experimental results spanning two different test domains demonstrate the agents' improved performance when making use of the weakness models.
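The core idea can be illustrated with a minimal sketch. This is not the paper's implementation; it is a toy illustration in which past opponent decisions are labeled weak or strong by comparison with a trusted judge, and the resulting weakness estimates bias state evaluation. All names, features, and the bonus weighting are hypothetical.

```python
from collections import defaultdict

def judge_label(opponent_move, judge_move):
    """Label a past decision weak (1) if it deviates from the trusted judge's choice."""
    return 1 if opponent_move != judge_move else 0

# Toy training examples: (state_features, opponent_move, judge_move).
examples = [
    ((0, 1), "a", "a"),
    ((0, 1), "b", "a"),
    ((1, 0), "c", "c"),
    ((1, 1), "d", "e"),
]

# Weakness model: estimated probability of a weak move per feature vector.
counts = defaultdict(lambda: [0, 0])  # features -> [weak_count, total]
for features, opp_move, judge_move in examples:
    counts[features][0] += judge_label(opp_move, judge_move)
    counts[features][1] += 1

def opponent_weakness(features):
    """Fraction of judged-weak decisions observed in states with these features."""
    weak, total = counts[features]
    return weak / total if total else 0.5  # uninformative prior for unseen states

def biased_value(features, base_eval, bonus=0.1):
    """Incorporate the weakness model: prefer states where the opponent is predicted weak."""
    return base_eval + bonus * opponent_weakness(features)
```

A real system would replace the frequency table with a generalizing learner (the paper's secondary keywords mention decision trees) and pair it with a self-weakness model so the agent seeks states of relative advantage.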


Keywords: Opponent Modeling, Games, Learning in Games, Multi-Agent Systems
Secondary Keywords: Adversary Search, Decision Trees
Online version: http://www.cs.technion.ac.il/~shaulm/papers/pdf/Markovitch-Reger-aamas2005.pdf
Bibtex entry:
 @article{Markovitch:2005:LER,
  Author = {Shaul Markovitch and Ronit Reger},
  Title = {Learning and Exploiting Relative Weaknesses of Opponent Agents},
  Year = {2005},
  Journal = {Autonomous Agents and Multi-Agent Systems},
  Volume = {10},
  Number = {2},
  Month = {March},
  Pages = {103--130},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Markovitch-Reger-aamas2005.pdf},
  Keywords = {Opponent Modeling, Games, Learning in Games, Multi-Agent Systems},
  Secondary-keywords = {Adversary Search, Decision Trees},
  Abstract = {
    Agents in a competitive interaction can greatly benefit from
    adapting to a particular adversary, rather than using the same
    general strategy against all opponents. One method of such
    adaptation is Opponent Modeling, in which a model of an opponent
    is acquired and utilized as part of the agent's decision procedure
    in future interactions with this opponent. However, acquiring an
    accurate model of a complex opponent strategy may be
    computationally infeasible. In addition, if the learned model is
    not accurate, then using it to predict the opponent's actions may
    harm the agent's strategy rather than improve it. We
    thus define the concept of opponent weakness, and present a method
    for learning a model of this simpler concept. We analyze examples
    of past behavior of an opponent in a particular domain, judging
    its actions using a trusted judge. We then infer a weakness model
    based on the opponent's actions relative to the domain state, and
    incorporate this model into our agent's decision procedure. We
    also make use of a similar self-weakness model, allowing the agent
    to prefer states in which the opponent is weak and our agent
    strong, that is, states where we have a relative advantage over the
    opponent.
    Experimental results spanning two different test domains
    demonstrate the agents' improved performance when making use of
    the weakness models.
  }
}