Tampere University of Technology

TUTCRIS Research Portal

Multi-sensor next-best-view planning as matroid-constrained submodular maximization

Research output: Contribution to journal › Article › Scientific › peer-review

Standard

Multi-sensor next-best-view planning as matroid-constrained submodular maximization. / Lauri, Mikko; Pajarinen, Joni; Peters, Jan; Frintrop, Simone.

In: IEEE Robotics and Automation Letters, Vol. 5, No. 4, 2020, pp. 5323-5330.

Harvard

Lauri, M, Pajarinen, J, Peters, J & Frintrop, S 2020, 'Multi-sensor next-best-view planning as matroid-constrained submodular maximization', IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5323-5330. https://doi.org/10.1109/LRA.2020.3007445

APA

Lauri, M., Pajarinen, J., Peters, J., & Frintrop, S. (2020). Multi-sensor next-best-view planning as matroid-constrained submodular maximization. IEEE Robotics and Automation Letters, 5(4), 5323-5330. https://doi.org/10.1109/LRA.2020.3007445

Vancouver

Lauri M, Pajarinen J, Peters J, Frintrop S. Multi-sensor next-best-view planning as matroid-constrained submodular maximization. IEEE Robotics and Automation Letters. 2020;5(4):5323-5330. https://doi.org/10.1109/LRA.2020.3007445

Author

Lauri, Mikko ; Pajarinen, Joni ; Peters, Jan ; Frintrop, Simone. / Multi-sensor next-best-view planning as matroid-constrained submodular maximization. In: IEEE Robotics and Automation Letters. 2020 ; Vol. 5, No. 4. pp. 5323-5330.

BibTeX

@article{ea98323dfd1b407aa38b5dc17e110e19,
  title = "Multi-sensor next-best-view planning as matroid-constrained submodular maximization",
  abstract = "3D scene models are useful in robotics for tasks such as path planning, object manipulation, and structural inspection. We consider the problem of creating a 3D model using depth images captured by a team of multiple robots. Each robot selects a viewpoint and captures a depth image from it, and the images are fused to update the scene model. The process is repeated until a scene model of desired quality is obtained. Next-best-view planning uses the current scene model to select the next viewpoints. The objective is to select viewpoints so that the images captured using them improve the quality of the scene model the most. In this letter, we address next-best-view planning for multiple depth cameras. We propose a utility function that scores sets of viewpoints and avoids overlap between multiple sensors. We show that multi-sensor next-best-view planning with this utility function is an instance of submodular maximization under a matroid constraint. This allows the planning problem to be solved by a polynomial-time greedy algorithm that yields a solution within a constant factor from the optimal. We evaluate the performance of our planning algorithm in simulated experiments with up to 8 sensors, and in real-world experiments using two robot arms equipped with depth cameras.",
  keywords = "multi-robot systems, Reactive and sensor-based planning, RGB-D perception",
  author = "Mikko Lauri and Joni Pajarinen and Jan Peters and Simone Frintrop",
  note = "EXT={"}Lauri, Mikko{"}",
  year = "2020",
  doi = "10.1109/LRA.2020.3007445",
  language = "English",
  volume = "5",
  pages = "5323--5330",
  journal = "IEEE Robotics and Automation Letters",
  issn = "2377-3766",
  publisher = "IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC",
  number = "4",
}

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Multi-sensor next-best-view planning as matroid-constrained submodular maximization

AU - Lauri, Mikko

AU - Pajarinen, Joni

AU - Peters, Jan

AU - Frintrop, Simone

N1 - EXT="Lauri, Mikko"

PY - 2020

Y1 - 2020

N2 - 3D scene models are useful in robotics for tasks such as path planning, object manipulation, and structural inspection. We consider the problem of creating a 3D model using depth images captured by a team of multiple robots. Each robot selects a viewpoint and captures a depth image from it, and the images are fused to update the scene model. The process is repeated until a scene model of desired quality is obtained. Next-best-view planning uses the current scene model to select the next viewpoints. The objective is to select viewpoints so that the images captured using them improve the quality of the scene model the most. In this letter, we address next-best-view planning for multiple depth cameras. We propose a utility function that scores sets of viewpoints and avoids overlap between multiple sensors. We show that multi-sensor next-best-view planning with this utility function is an instance of submodular maximization under a matroid constraint. This allows the planning problem to be solved by a polynomial-time greedy algorithm that yields a solution within a constant factor from the optimal. We evaluate the performance of our planning algorithm in simulated experiments with up to 8 sensors, and in real-world experiments using two robot arms equipped with depth cameras.

AB - 3D scene models are useful in robotics for tasks such as path planning, object manipulation, and structural inspection. We consider the problem of creating a 3D model using depth images captured by a team of multiple robots. Each robot selects a viewpoint and captures a depth image from it, and the images are fused to update the scene model. The process is repeated until a scene model of desired quality is obtained. Next-best-view planning uses the current scene model to select the next viewpoints. The objective is to select viewpoints so that the images captured using them improve the quality of the scene model the most. In this letter, we address next-best-view planning for multiple depth cameras. We propose a utility function that scores sets of viewpoints and avoids overlap between multiple sensors. We show that multi-sensor next-best-view planning with this utility function is an instance of submodular maximization under a matroid constraint. This allows the planning problem to be solved by a polynomial-time greedy algorithm that yields a solution within a constant factor from the optimal. We evaluate the performance of our planning algorithm in simulated experiments with up to 8 sensors, and in real-world experiments using two robot arms equipped with depth cameras.

KW - multi-robot systems

KW - Reactive and sensor-based planning

KW - RGB-D perception

U2 - 10.1109/LRA.2020.3007445

DO - 10.1109/LRA.2020.3007445

M3 - Article

VL - 5

SP - 5323

EP - 5330

JO - IEEE Robotics and Automation Letters

JF - IEEE Robotics and Automation Letters

SN - 2377-3766

IS - 4

ER -
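
Illustration of the approach

As a brief illustration of the method the abstract describes, below is a minimal Python sketch (not the authors' implementation) of greedy submodular maximization under a partition matroid: each sensor owns its own set of candidate viewpoints and may be assigned at most one. The coverage-style utility, the data structures candidates and visible_voxels, and the function name greedy_next_best_views are all hypothetical stand-ins. For a monotone submodular utility, this plain greedy rule is guaranteed to reach at least half the optimal value under a matroid constraint, which is the kind of constant-factor guarantee the abstract refers to.

# A minimal sketch (hypothetical, not the paper's code): greedy
# maximization of a monotone submodular coverage utility under a
# partition matroid constraint (at most one viewpoint per sensor).

def greedy_next_best_views(candidates, visible_voxels):
    """Assign one viewpoint per sensor by largest marginal coverage gain.

    candidates: dict sensor id -> list of candidate viewpoint ids
    visible_voxels: dict viewpoint id -> set of voxel ids the view would
        observe (a stand-in for ray-cast predictions from the scene model)
    """
    covered = set()      # voxels seen by the viewpoints chosen so far
    selection = {}       # sensor id -> chosen viewpoint id
    unassigned = set(candidates)

    while unassigned:
        # Pick the feasible (sensor, viewpoint) pair with the largest
        # marginal gain; restricting the search to unassigned sensors
        # enforces the partition matroid constraint.
        best_gain, best_sensor, best_view = -1, None, None
        for sensor in unassigned:
            for view in candidates[sensor]:
                gain = len(visible_voxels[view] - covered)
                if gain > best_gain:
                    best_gain, best_sensor, best_view = gain, sensor, view
        selection[best_sensor] = best_view
        covered |= visible_voxels[best_view]
        unassigned.remove(best_sensor)

    return selection, covered


if __name__ == "__main__":
    # Toy example with two robot arms: v3 overlaps v1 heavily, so the
    # diminishing-returns utility steers the second arm toward v4.
    candidates = {"arm_left": ["v1", "v2"], "arm_right": ["v3", "v4"]}
    visible_voxels = {
        "v1": {1, 2, 3, 4},
        "v2": {1, 2, 3},
        "v3": {3, 4, 5},
        "v4": {5, 6, 7},
    }
    selection, covered = greedy_next_best_views(candidates, visible_voxels)
    print(selection, len(covered))  # {'arm_left': 'v1', 'arm_right': 'v4'} 7

Because the marginal gain of a viewpoint shrinks as more voxels are already covered, the greedy rule automatically steers later sensors away from views that overlap earlier selections, which is how a coverage-style utility of this kind avoids overlap between sensors.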