Efficient 3D visual perception for robotic rock breaking
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Standard
Efficient 3D visual perception for robotic rock breaking. / Niu, Longchuan; Chen, Ke; Jia, Kui; Mattila, Jouni.
2019 IEEE 15th International Conference on Automation Science and Engineering, CASE 2019. IEEE, 2019. p. 1124-1130 (IEEE International Conference on Automation Science and Engineering).
RIS (suitable for import to EndNote)
TY - GEN
T1 - Efficient 3D visual perception for robotic rock breaking
AU - Niu, Longchuan
AU - Chen, Ke
AU - Jia, Kui
AU - Mattila, Jouni
N1 - EXT="Chen, Ke" jufoid=73680
PY - 2019/8/1
Y1 - 2019/8/1
N2 - In recent years, underground mining automation (e.g., heavy-duty robots carrying rock breaker tools for secondary breaking) has drawn substantial interest. This breaking process is needed only when over-sized rocks threaten to jam the mine material flow. In the worst case, a pile of overlapping rocks can get stuck on top of a crusher's grate plate. For a human operator, it is relatively easy to decide where the rocks are in the pile and in which order to crush them. In autonomous operation, a robust and fast visual perception system is needed to execute robot motion commands. In this paper, we propose a pipeline for fast detection and pose estimation of individual rocks in cluttered scenes. We employ the state-of-the-art YOLOv3 as a 2D detector, reconstruct a 3D point cloud for each detected rock within its 2D region using our proposed novel method, and finally estimate the rock centroid positions and surface normal vectors from the predicted point cloud. The detected centroids are ordered by the depth of the rock surface from the camera, which gives the breaking sequence of the rocks. For system evaluation in real rock-breaking experiments, we collected a new dataset of 4780 images containing 1 to 12 rocks on a grate plate. The proposed pipeline achieves 97.47% precision on overall detection at a real-time speed of around 15 Hz.
AB - In recent years, underground mining automation (e.g., heavy-duty robots carrying rock breaker tools for secondary breaking) has drawn substantial interest. This breaking process is needed only when over-sized rocks threaten to jam the mine material flow. In the worst case, a pile of overlapping rocks can get stuck on top of a crusher's grate plate. For a human operator, it is relatively easy to decide where the rocks are in the pile and in which order to crush them. In autonomous operation, a robust and fast visual perception system is needed to execute robot motion commands. In this paper, we propose a pipeline for fast detection and pose estimation of individual rocks in cluttered scenes. We employ the state-of-the-art YOLOv3 as a 2D detector, reconstruct a 3D point cloud for each detected rock within its 2D region using our proposed novel method, and finally estimate the rock centroid positions and surface normal vectors from the predicted point cloud. The detected centroids are ordered by the depth of the rock surface from the camera, which gives the breaking sequence of the rocks. For system evaluation in real rock-breaking experiments, we collected a new dataset of 4780 images containing 1 to 12 rocks on a grate plate. The proposed pipeline achieves 97.47% precision on overall detection at a real-time speed of around 15 Hz.
U2 - 10.1109/COASE.2019.8842859
DO - 10.1109/COASE.2019.8842859
M3 - Conference contribution
T3 - IEEE International Conference on Automation Science and Engineering
SP - 1124
EP - 1130
BT - 2019 IEEE 15th International Conference on Automation Science and Engineering, CASE 2019
PB - IEEE
ER -
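
The abstract describes three geometric steps after 2D detection: back-projecting the depth pixels inside each YOLOv3 bounding box into a point cloud, estimating each rock's centroid and surface normal, and ordering the rocks by the depth of their surface to the camera. The sketch below illustrates that geometry in plain NumPy; the pinhole intrinsics (fx, fy, cx, cy), the (x_min, y_min, x_max, y_max) box format, and the nearest-surface-first ordering are illustrative assumptions, not the authors' published implementation.

# Minimal sketch (not the paper's code): point-cloud reconstruction inside a 2D box,
# centroid and surface-normal estimation, and depth-ordered breaking sequence.
import numpy as np

def box_to_points(depth, box, fx, fy, cx, cy):
    """Back-project depth pixels inside a 2D box into camera-frame 3D points."""
    x0, y0, x1, y1 = box
    v, u = np.mgrid[y0:y1, x0:x1]            # pixel rows (v) and columns (u) in the box
    z = depth[y0:y1, x0:x1]
    valid = z > 0                             # drop missing depth readings
    z = z[valid]
    x = (u[valid] - cx) * z / fx              # assumed pinhole camera model
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)        # (N, 3) point cloud

def centroid_and_normal(points):
    """Centroid plus surface normal via PCA (direction of least variance)."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)       # 3x3 covariance of the rock surface points
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    normal = eigvecs[:, 0]                    # eigenvector of the smallest eigenvalue
    if normal[2] > 0:                         # flip so the normal faces the camera (-Z)
        normal = -normal
    return centroid, normal

def breaking_order(depth, boxes, fx, fy, cx, cy):
    """Return (centroid, normal) per detection, nearest rock surface first (assumed order)."""
    results = []
    for box in boxes:
        pts = box_to_points(depth, box, fx, fy, cx, cy)
        if pts.shape[0] < 3:                  # need a few points for a stable PCA normal
            continue
        results.append(centroid_and_normal(pts))
    return sorted(results, key=lambda cn: cn[0][2])   # sort by centroid depth

The PCA normal is a rough stand-in for whatever surface-fitting the paper uses; it is adequate for a roughly planar rock top but says nothing about the full 6-DoF pose estimation evaluated in the paper.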