TUTCRIS - Tampereen teknillinen yliopisto


Self-localization of dynamic user-worn microphones from observed speech

Research output: peer-reviewed

Details

Original language: English
Pages: 76-85
Number of pages: 10
Journal: Applied Acoustics
Volume: 117
Issue: Part A
DOI - permanent links
Status: Published - 9 November 2016
OKM publication type: A1 Original article

Abstract

The proliferation of mobile devices and, most recently, wearables has raised interest in utilizing their sensors for applications such as indoor localization. We present the first acoustic self-localization scheme that is passive and capable of operating while the sensors are moving and possibly unsynchronized. The result is the set of relative microphone positions, i.e. an established ad hoc microphone array. The proposed system exploits the fact that each device is worn by its user, e.g. attached to his or her clothing: the user acts as a sound source and the user-worn microphone as the sensor. Such a source-sensor pair is referred to as a node. Node-related spatial information is obtained from Time Difference of Arrival (TDOA) estimates computed from the audio captured by the nodes. Kalman filtering is used to track the nodes and to predict their spatial information during periods of node silence. Finally, the node positions are recovered using multidimensional scaling (MDS). The only input the proposed system requires to localize the moving nodes is the sound produced by the nodes themselves, such as speech. The general framework for acoustic self-localization is presented, followed by an implementation that demonstrates the concept. Real data collected with off-the-shelf equipment is used to evaluate the positioning accuracy of the nodes against an image-based reference method. The presented system achieves an accuracy of approximately 10 cm in an acoustic laboratory.
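The final step of the pipeline described above, recovering relative node positions with multidimensional scaling, can be sketched as follows. This is a generic classical-MDS reconstruction under the assumption that a complete matrix of pairwise node distances is available (in the paper these would be derived from TDOA estimates); it is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative coordinates from an n x n pairwise distance matrix D.

    The result is unique only up to rotation, reflection, and translation,
    which is exactly the ambiguity inherent in acoustic self-localization
    without external anchors.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]           # keep the 'dim' largest
    scale = np.sqrt(np.maximum(w[idx], 0.0))  # clip tiny negatives from noise
    return V[:, idx] * scale                  # n x dim coordinate matrix

# Toy example: three nodes at known positions, observed only as distances.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
X = classical_mds(D)

# The recovered layout reproduces the pairwise distances.
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

In the noisy real-data setting the distance matrix is only approximately Euclidean, so the eigenvalue clipping above (or a least-squares MDS variant) becomes necessary rather than cosmetic.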

Research areas

Publication forum level