## Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement

Research output › peer-reviewed

### Standard

**Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement.** / Lauri, Mikko; Pajarinen, Joni; Peters, Jan. In: *AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS*, Vol. 34, No. 2, 42, 01.10.2020.

### Harvard

Lauri, M, Pajarinen, J & Peters, J 2020, 'Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement', *AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS*, vol. 34, no. 2, 42. https://doi.org/10.1007/s10458-020-09467-6

### APA

Lauri, M., Pajarinen, J., & Peters, J. (2020). Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement. *AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS*, *34*(2), [42]. https://doi.org/10.1007/s10458-020-09467-6

### Vancouver

Lauri M, Pajarinen J, Peters J. Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement. AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS. 2020 Oct 1;34(2):42. https://doi.org/10.1007/s10458-020-09467-6

### RIS (suitable for import to EndNote)

TY - JOUR

T1 - Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement

AU - Lauri, Mikko

AU - Pajarinen, Joni

AU - Peters, Jan

PY - 2020/10/1

Y1 - 2020/10/1

N2 - Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest when constant communication cannot be assumed. This is common in tasks involving information gathering with multiple independently operating sensor devices that may operate over large physical distances, such as unmanned aerial vehicles, or in communication limited environments such as in the case of autonomous underwater vehicles. In this paper, we frame the information gathering task as a general decentralized partially observable Markov decision process (Dec-POMDP). The Dec-POMDP is a principled model for co-operative decentralized multi-agent decision-making. An optimal solution of a Dec-POMDP is a set of local policies, one for each agent, which maximizes the expected sum of rewards over time. In contrast to most prior work on Dec-POMDPs, we set the reward as a non-linear function of the agents’ state information, for example the negative Shannon entropy. We argue that such reward functions are well-suited for decentralized information gathering problems. We prove that if the reward function is convex, then the finite-horizon value function of the Dec-POMDP is also convex. We propose the first heuristic anytime algorithm for information gathering Dec-POMDPs, and empirically prove its effectiveness by solving discrete problems an order of magnitude larger than previous state-of-the-art. We also propose an extension to continuous-state problems with finite action and observation spaces by employing particle filtering. The effectiveness of the proposed algorithms is verified in domains such as decentralized target tracking, scientific survey planning, and signal source localization.

AB - Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest when constant communication cannot be assumed. This is common in tasks involving information gathering with multiple independently operating sensor devices that may operate over large physical distances, such as unmanned aerial vehicles, or in communication limited environments such as in the case of autonomous underwater vehicles. In this paper, we frame the information gathering task as a general decentralized partially observable Markov decision process (Dec-POMDP). The Dec-POMDP is a principled model for co-operative decentralized multi-agent decision-making. An optimal solution of a Dec-POMDP is a set of local policies, one for each agent, which maximizes the expected sum of rewards over time. In contrast to most prior work on Dec-POMDPs, we set the reward as a non-linear function of the agents’ state information, for example the negative Shannon entropy. We argue that such reward functions are well-suited for decentralized information gathering problems. We prove that if the reward function is convex, then the finite-horizon value function of the Dec-POMDP is also convex. We propose the first heuristic anytime algorithm for information gathering Dec-POMDPs, and empirically prove its effectiveness by solving discrete problems an order of magnitude larger than previous state-of-the-art. We also propose an extension to continuous-state problems with finite action and observation spaces by employing particle filtering. The effectiveness of the proposed algorithms is verified in domains such as decentralized target tracking, scientific survey planning, and signal source localization.

KW - Active perception

KW - Decentralized POMDP

KW - Information gathering

KW - Planning under uncertainty

U2 - 10.1007/s10458-020-09467-6

DO - 10.1007/s10458-020-09467-6

M3 - Article

VL - 34

JO - AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS

JF - AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS

SN - 1387-2532

IS - 2

M1 - 42

ER -
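
The abstract describes setting the Dec-POMDP reward as a non-linear function of the agents' state information, such as the negative Shannon entropy of a belief, so that more certain beliefs earn higher reward. A minimal sketch of that idea for a discrete belief is given below; this is an illustrative example only, not the authors' implementation, and the function name `negative_entropy_reward` and the sample beliefs are hypothetical:

```python
import numpy as np

def negative_entropy_reward(belief, eps=1e-12):
    """Negative Shannon entropy -H(b) of a discrete belief.

    Higher (closer to 0) means a more concentrated, i.e. more
    informative, belief; the uniform belief scores lowest.
    """
    b = np.asarray(belief, dtype=float)
    b = b / b.sum()                            # normalize defensively
    return float(np.sum(b * np.log(b + eps)))  # sum b log b = -H(b)

# A uniform belief over 4 states (maximal uncertainty) scores -log(4);
# a sharply peaked belief scores much closer to 0.
uniform = negative_entropy_reward([0.25, 0.25, 0.25, 0.25])
peaked = negative_entropy_reward([0.97, 0.01, 0.01, 0.01])
```

Because `-H(b)` is convex in the belief `b`, a reward of this form falls under the paper's convexity result for the finite-horizon value function.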