Tampere University of Technology

TUTCRIS Research Portal

Multi-sensor next-best-view planning as matroid-constrained submodular maximization

Research output: Contribution to journal › Article › Scientific › peer-reviewed

Details

Original language: English
Pages (from-to): 5323-5330
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 4
DOIs
Publication status: Published - 2020
Publication type: A1 Journal article-refereed

Abstract

3D scene models are useful in robotics for tasks such as path planning, object manipulation, and structural inspection. We consider the problem of creating a 3D model using depth images captured by a team of multiple robots. Each robot selects a viewpoint and captures a depth image from it, and the images are fused to update the scene model. The process is repeated until a scene model of the desired quality is obtained. Next-best-view planning uses the current scene model to select the next viewpoints. The objective is to select viewpoints so that the images captured from them improve the quality of the scene model the most. In this letter, we address next-best-view planning for multiple depth cameras. We propose a utility function that scores sets of viewpoints and avoids overlap between multiple sensors. We show that multi-sensor next-best-view planning with this utility function is an instance of submodular maximization under a matroid constraint. This allows the planning problem to be solved by a polynomial-time greedy algorithm that yields a solution within a constant factor of the optimal. We evaluate the performance of our planning algorithm in simulated experiments with up to 8 sensors, and in real-world experiments using two robot arms equipped with depth cameras.
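To illustrate the structure the abstract describes, here is a minimal sketch of greedy maximization under a partition matroid constraint (each sensor picks exactly one viewpoint). The `coverage` utility and the visibility sets below are hypothetical toy data, not the paper's actual utility function; for a monotone submodular utility, this greedy rule is known to be within a constant factor (1/2) of optimal.

```python
def greedy_partition_matroid(utility, candidates):
    """Greedily select one viewpoint per sensor.

    `candidates` maps each sensor to its list of candidate viewpoints
    (a partition matroid: at most one pick per sensor). `utility`
    scores a list of (sensor, viewpoint) pairs.
    """
    selected = []
    unassigned = sorted(candidates)  # deterministic tie-breaking
    while unassigned:
        base = utility(selected)
        # Pick the feasible (sensor, viewpoint) pair with the
        # largest marginal gain in utility.
        best = max(
            ((s, v) for s in unassigned for v in candidates[s]),
            key=lambda sv: utility(selected + [sv]) - base,
        )
        selected.append(best)
        unassigned.remove(best[0])  # that sensor is now assigned
    return selected

# Toy coverage utility: number of distinct scene cells seen by the
# chosen viewpoints (hypothetical visibility sets for illustration).
visible = {
    ("cam1", "a"): {1, 2}, ("cam1", "b"): {3},
    ("cam2", "c"): {1, 2}, ("cam2", "d"): {4, 5},
}

def coverage(sel):
    return len(set().union(*(visible[sv] for sv in sel)))

plan = greedy_partition_matroid(
    coverage, {"cam1": ["a", "b"], "cam2": ["c", "d"]}
)
# cam2 skips viewpoint "c", whose view overlaps cam1's pick "a".
```

Note how the submodular (diminishing-returns) utility makes the second sensor avoid viewpoint "c", since the cells it sees are already covered, which mirrors the overlap-avoidance behaviour the utility function in the letter is designed to produce.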