
Online Speedup Learning for Optimal Planning


Carmel Domshlak, Erez Karpas and Shaul Markovitch. Online Speedup Learning for Optimal Planning. Journal of Artificial Intelligence Research, 44:709-755, 2012.


Abstract

Domain-independent planning is one of the foundational areas in the field of Artificial Intelligence. A description of a planning task consists of an initial world state, a goal, and a set of actions for modifying the world state. The objective is to find a sequence of actions, that is, a plan, that transforms the initial world state into a goal state. In optimal planning, we are interested in finding not just a plan, but one of the cheapest plans. A prominent approach to optimal planning these days is heuristic state-space search, guided by admissible heuristic functions. Numerous admissible heuristics have been developed, each with its own strengths and weaknesses, and it is well known that there is no single "best" heuristic for optimal planning in general. Thus, which heuristic to choose for a given planning task is a difficult question. This difficulty can be avoided by combining several heuristics, but that requires computing numerous heuristic estimates at each state, and the time spent doing so may outweigh the time saved by the combined advantages of the different heuristics. We present a novel method that reduces the cost of combining admissible heuristics for optimal planning, while maintaining its benefits. Using an idealized search space model, we formulate a decision rule for choosing the best heuristic to compute at each state. We then present an active online learning approach for learning a classifier with that decision rule as the target concept, and employ the learned classifier to decide which heuristic to compute at each state. We evaluate this technique empirically, and show that it substantially outperforms the standard method for combining several heuristics via their pointwise maximum.
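The contrast the abstract draws can be sketched in a few lines of toy code. This is a hypothetical illustration, not the paper's actual algorithm: the two heuristic functions, the state representation, and the stand-in classifier are all placeholder assumptions made up for the sketch.

```python
# Toy sketch of the two ways to combine admissible heuristics.
# Baseline: compute every heuristic at every state and take the
# pointwise maximum. Selective: a (hypothetical) learned classifier
# picks ONE heuristic per state, paying only that heuristic's cost.

def h_cheap(state):
    # Placeholder for a fast but weak admissible heuristic.
    return abs(state)

def h_expensive(state):
    # Placeholder for a slower but stronger admissible heuristic.
    return abs(state) + (abs(state) % 3)

def max_combination(state):
    # Standard approach: pointwise maximum of all heuristics.
    # Dominates each component, but computes all of them.
    return max(h_cheap(state), h_expensive(state))

def selective(state, classifier):
    # Learned decision rule: compute only the heuristic the
    # classifier predicts is worth its cost at this state.
    # Any single admissible heuristic is itself admissible,
    # so the search remains optimal either way.
    if classifier(state):
        return h_expensive(state)
    return h_cheap(state)

# Trivial stand-in classifier (purely illustrative): reserve the
# expensive heuristic for "hard-looking" states.
classifier = lambda s: abs(s) > 5
```

The point of the sketch is the cost model: `max_combination` always pays for both heuristic evaluations, while `selective` pays for exactly one, at the price of sometimes getting the weaker estimate.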


Keywords: Planning, Speedup Learning
Bibtex entry:
 @article{Karpas:2012:OSL,
  Author = {Carmel Domshlak and Erez Karpas and Shaul Markovitch},
  Title = {Online Speedup Learning for Optimal Planning},
  Year = {2012},
  Journal = {Journal of Artificial Intelligence Research},
  Volume = {44},
  Pages = {709--755},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Domshlak-Karpas-Markovitch-JAIR2012.pdf},
  Keywords = {Planning, Speedup Learning},
  Abstract = {
    Domain-independent planning is one of the foundational areas in
    the field of Artificial Intelligence. A description of a planning
    task consists of an initial world state, a goal, and a set of
    actions for modifying the world state. The objective is to find a
    sequence of actions, that is, a plan, that transforms the initial
    world state into a goal state. In optimal planning, we are
    interested in finding not just a plan, but one of the cheapest
    plans. A prominent approach to optimal planning these days is
    heuristic state-space search, guided by admissible heuristic
    functions. Numerous admissible heuristics have been developed,
    each with its own strengths and weaknesses, and it is well known
    that there is no single ``best'' heuristic for optimal planning in
    general. Thus, which heuristic to choose for a given planning task
    is a difficult question. This difficulty can be avoided by
    combining several heuristics, but that requires computing numerous
    heuristic estimates at each state, and the time spent doing so
    may outweigh the time saved by the combined advantages of the
    different heuristics. We present a novel
    method that reduces the cost of combining admissible heuristics
    for optimal planning, while maintaining its benefits. Using an
    idealized search space model, we formulate a decision rule for
    choosing the best heuristic to compute at each state. We then
    present an active online learning approach for learning a
    classifier with that decision rule as the target concept, and
    employ the learned classifier to decide which heuristic to compute
    at each state. We evaluate this technique empirically, and show
    that it substantially outperforms the standard method for
    combining several heuristics via their pointwise maximum.
  }
}