Shlomo Zilberstein (University of Massachusetts, Amherst)

We present new planning algorithms for situations in which multiple decision makers try to achieve a common objective under uncertainty. Planning what actions to take in such settings is particularly hard when each decision maker has different partial information about the overall state of the domain. This problem arises in such areas as multi-robot coordination, load balancing, protocol design for broadcast channels, and sensor network management. The talk introduces a decision-theoretic framework for studying these problems called the decentralized partially observable Markov decision process (DEC-POMDP). Exact algorithms for solving DEC-POMDPs tend to run out of memory even for small toy problems. We introduce memory-bounded techniques that improve scalability and return near-optimal results. One class of algorithms is designed for finite-horizon problems, in which case agent plans are represented using policy trees. Another class of algorithms is designed for infinite-horizon problems, in which case agent plans are represented as stochastic finite-state controllers. We analyze the performance of these algorithms and describe current research efforts to further improve the applicability of the DEC-POMDP framework.