TUTCRIS - Tampereen teknillinen yliopisto

Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement

Research output: peer-reviewed

Standard

Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement. / Lauri, Mikko; Pajarinen, Joni; Peters, Jan.

In: AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, Vol. 34, No. 2, 42, 01.10.2020.

Harvard

Lauri, M, Pajarinen, J & Peters, J 2020, 'Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement', AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, vol. 34, no. 2, 42. https://doi.org/10.1007/s10458-020-09467-6

APA

Lauri, M., Pajarinen, J., & Peters, J. (2020). Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement. AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 34(2), 42. https://doi.org/10.1007/s10458-020-09467-6

Vancouver

Lauri M, Pajarinen J, Peters J. Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement. AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS. 2020 Oct 1;34(2):42. https://doi.org/10.1007/s10458-020-09467-6

Author

Lauri, Mikko ; Pajarinen, Joni ; Peters, Jan. / Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement. In: AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS. 2020 ; Vol. 34, No. 2.

BibTeX - Download

@article{2138be0d91314005a2e8a9d57fd24b53,
title = "Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement",
abstract = "Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest when constant communication cannot be assumed. This is common in tasks involving information gathering with multiple independently operating sensor devices that may operate over large physical distances, such as unmanned aerial vehicles, or in communication limited environments such as in the case of autonomous underwater vehicles. In this paper, we frame the information gathering task as a general decentralized partially observable Markov decision process (Dec-POMDP). The Dec-POMDP is a principled model for co-operative decentralized multi-agent decision-making. An optimal solution of a Dec-POMDP is a set of local policies, one for each agent, which maximizes the expected sum of rewards over time. In contrast to most prior work on Dec-POMDPs, we set the reward as a non-linear function of the agents’ state information, for example the negative Shannon entropy. We argue that such reward functions are well-suited for decentralized information gathering problems. We prove that if the reward function is convex, then the finite-horizon value function of the Dec-POMDP is also convex. We propose the first heuristic anytime algorithm for information gathering Dec-POMDPs, and empirically prove its effectiveness by solving discrete problems an order of magnitude larger than previous state-of-the-art. We also propose an extension to continuous-state problems with finite action and observation spaces by employing particle filtering. The effectiveness of the proposed algorithms is verified in domains such as decentralized target tracking, scientific survey planning, and signal source localization.",
keywords = "Active perception, Decentralized POMDP, Information gathering, Planning under uncertainty",
author = "Mikko Lauri and Joni Pajarinen and Jan Peters",
year = "2020",
month = "10",
day = "1",
doi = "10.1007/s10458-020-09467-6",
language = "English",
volume = "34",
journal = "AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS",
issn = "1387-2532",
publisher = "Springer Verlag",
number = "2",

}
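
The abstract defines the reward as a non-linear, convex function of the agents' state information, with negative Shannon entropy as the running example. As a minimal illustrative sketch only (not the authors' implementation), assuming a discrete belief stored as a probability vector, such a reward could look like:

import numpy as np

def negative_entropy_reward(belief, eps=1e-12):
    # Negative Shannon entropy of a discrete belief: sum_s b(s) log b(s).
    # A higher (less negative) reward means lower uncertainty about the
    # state, and the function is convex in the belief, matching the class
    # of reward functions considered in the paper.
    b = np.asarray(belief, dtype=float)
    b = b / b.sum()  # defensively normalize
    return float(np.sum(b * np.log(b + eps)))

# A peaked belief earns a higher reward than a uniform one:
print(negative_entropy_reward([0.97, 0.01, 0.01, 0.01]))  # approx. -0.17
print(negative_entropy_reward([0.25, 0.25, 0.25, 0.25]))  # approx. -1.39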

RIS (suitable for import to EndNote) - Download

TY - JOUR

T1 - Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement

AU - Lauri, Mikko

AU - Pajarinen, Joni

AU - Peters, Jan

PY - 2020/10/1

Y1 - 2020/10/1

N2 - Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest when constant communication cannot be assumed. This is common in tasks involving information gathering with multiple independently operating sensor devices that may operate over large physical distances, such as unmanned aerial vehicles, or in communication limited environments such as in the case of autonomous underwater vehicles. In this paper, we frame the information gathering task as a general decentralized partially observable Markov decision process (Dec-POMDP). The Dec-POMDP is a principled model for co-operative decentralized multi-agent decision-making. An optimal solution of a Dec-POMDP is a set of local policies, one for each agent, which maximizes the expected sum of rewards over time. In contrast to most prior work on Dec-POMDPs, we set the reward as a non-linear function of the agents’ state information, for example the negative Shannon entropy. We argue that such reward functions are well-suited for decentralized information gathering problems. We prove that if the reward function is convex, then the finite-horizon value function of the Dec-POMDP is also convex. We propose the first heuristic anytime algorithm for information gathering Dec-POMDPs, and empirically prove its effectiveness by solving discrete problems an order of magnitude larger than previous state-of-the-art. We also propose an extension to continuous-state problems with finite action and observation spaces by employing particle filtering. The effectiveness of the proposed algorithms is verified in domains such as decentralized target tracking, scientific survey planning, and signal source localization.

AB - Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest when constant communication cannot be assumed. This is common in tasks involving information gathering with multiple independently operating sensor devices that may operate over large physical distances, such as unmanned aerial vehicles, or in communication limited environments such as in the case of autonomous underwater vehicles. In this paper, we frame the information gathering task as a general decentralized partially observable Markov decision process (Dec-POMDP). The Dec-POMDP is a principled model for co-operative decentralized multi-agent decision-making. An optimal solution of a Dec-POMDP is a set of local policies, one for each agent, which maximizes the expected sum of rewards over time. In contrast to most prior work on Dec-POMDPs, we set the reward as a non-linear function of the agents’ state information, for example the negative Shannon entropy. We argue that such reward functions are well-suited for decentralized information gathering problems. We prove that if the reward function is convex, then the finite-horizon value function of the Dec-POMDP is also convex. We propose the first heuristic anytime algorithm for information gathering Dec-POMDPs, and empirically prove its effectiveness by solving discrete problems an order of magnitude larger than previous state-of-the-art. We also propose an extension to continuous-state problems with finite action and observation spaces by employing particle filtering. The effectiveness of the proposed algorithms is verified in domains such as decentralized target tracking, scientific survey planning, and signal source localization.

KW - Active perception

KW - Decentralized POMDP

KW - Information gathering

KW - Planning under uncertainty

U2 - 10.1007/s10458-020-09467-6

DO - 10.1007/s10458-020-09467-6

M3 - Article

VL - 34

JO - AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS

JF - AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS

SN - 1387-2532

IS - 2

M1 - 42

ER -
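
The continuous-state extension referenced in the abstract replaces exact belief updates with particle filtering while keeping the action and observation spaces finite. The following is a generic bootstrap particle filter step, given as an illustrative sketch rather than the paper's code; transition_sample and observation_likelihood are hypothetical placeholder models:

import numpy as np

def particle_filter_step(particles, weights, action, observation,
                         transition_sample, observation_likelihood,
                         rng=None):
    # One bootstrap particle filter update for an (N, d) array of state
    # samples with importance weights summing to one.
    if rng is None:
        rng = np.random.default_rng()
    # Propagate each particle through the assumed transition model.
    particles = transition_sample(particles, action, rng)
    # Reweight by the likelihood of the received observation.
    weights = weights * observation_likelihood(particles, action, observation)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

An information reward can then be estimated from the weighted particle set, for example by discretizing the samples or using a density estimate; the paper's exact estimator is not reproduced here.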