Algorithms for Positioning with Nonlinear Measurement Models and Heavy-tailed and Asymmetric Distributed Additive Noise
Research output: Book/Report › Doctoral thesis › Collection of Articles
|Publisher||Tampere University of Technology|
|Number of pages||82|
|Publication status||Published - 26 Aug 2016|
|Publication type||G5 Doctoral dissertation (article)|
|Series||Tampere University of Technology. Publication|
A positioning problem with a nonlinear measurement function is often solved by a nonlinear least squares (NLS) method or, when filtering is desired, by an extended Kalman filter (EKF). However, these methods cannot capture multiple peaks of the likelihood function and do not address heavy-tailedness or skewness. Approximating the likelihood by a Gaussian mixture (GM) and using a GM filter (GMF) solves this problem. The drawback is that a precise approximation requires a large number of components in the GM, which makes the approach unsuitable for real-time positioning on small mobile devices.
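The GM approximation can be sketched for a single range measurement: the likelihood of a range from one transmitter with an isotropic antenna is ring-shaped as a function of the receiver position, and can be covered by Gaussian components placed along the ring. The placement rule and all function names below are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

def gm_ring_likelihood(tx_pos, measured_range, sigma, n_components):
    """Approximate the ring-shaped range likelihood by a Gaussian mixture.

    Components are placed evenly on the circle of radius `measured_range`
    around the transmitter; each gets the range-noise variance sigma^2.
    Many components are needed for a tight fit, which is the drawback
    noted in the text.
    """
    angles = np.linspace(0.0, 2 * np.pi, n_components, endpoint=False)
    means = tx_pos + measured_range * np.column_stack(
        [np.cos(angles), np.sin(angles)])
    weights = np.full(n_components, 1.0 / n_components)
    covs = np.array([sigma ** 2 * np.eye(2)] * n_components)
    return weights, means, covs

def gm_eval(x, weights, means, covs):
    """Evaluate the 2-D mixture density at point x."""
    dens = 0.0
    for w, m, c in zip(weights, means, covs):
        d = x - m
        norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(c)))
        dens += w * norm * np.exp(-0.5 * d @ np.linalg.inv(c) @ d)
    return dens
```

A point on the measured ring then receives a much higher density than a point far away from it, which is what the filter exploits.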
This thesis studies a generalised version of the Gaussian mixture, called the GGM, to capture multiple peaks. It relaxes the GM's restriction to non-negative component weights. The analysis shows that, compared with the GM, the GGM significantly reduces the number of Gaussian components required to approximate the measurement likelihood of a transmitter with an isotropic antenna. The GGM therefore facilitates real-time positioning on small mobile devices. In tests with a cellular telephone network and with an ultra-wideband network, the GGM and its filter provide significantly better positioning accuracy than the NLS method and the EKF.
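A minimal 1-D illustration of the signed-weight idea, with component values invented for this sketch: subtracting a narrow Gaussian from a broad one carves a dip into the density, producing a bimodal shape with two components where a conventional GM would need more.

```python
import numpy as np

def ggm_eval(x, weights, means, variances):
    """Evaluate a 1-D generalised Gaussian mixture (GGM) at points x.

    Unlike in a standard GM, weights may be negative. They still sum to
    one, so the function integrates to one, but pointwise non-negativity
    must be verified separately.
    """
    x = np.asarray(x, dtype=float)
    dens = np.zeros_like(x)
    for w, m, v in zip(weights, means, variances):
        dens += w * np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
    return dens

# Two components: broad positive minus narrow negative -> bimodal density.
weights = [1.25, -0.25]
means = [0.0, 0.0]
variances = [4.0, 0.25]
```

With these example values the density stays non-negative everywhere and has a local minimum at the origin, i.e. two peaks from only two components.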
For positioning with nonlinear measurement models and heavy-tailed, skewed measurement errors, an expectation maximisation (EM) algorithm is studied. The EM algorithm is compared with a standard NLS algorithm in simulations and in tests with realistic emulated data from a Long Term Evolution network. The EM algorithm is more robust to measurement outliers. If the errors in the training and positioning data are similarly distributed, the EM algorithm yields significantly better position estimates than the NLS method. The improvement in accuracy and precision comes at the cost of moderately higher computational demand and greater vulnerability to changing patterns in the error distribution between training and positioning data. This vulnerability arises because the skew-t distribution (used in EM) has four parameters, whereas the normal distribution (used in NLS) has only two, so the skew-t fits the pattern in the training data more closely. On the downside, if the patterns in the training and positioning data differ, the skew-t fit is not necessarily better than the normal fit, which weakens the EM algorithm's positioning accuracy and precision. This reduced generalisability due to overfitting is a basic principle of machine learning.
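The robustness mechanism can be sketched with a simplified stand-in for the skew-t EM: for a symmetric Student-t error model, the E-step reduces to residual-dependent weights, giving an iteratively reweighted Gauss-Newton scheme for range-based positioning. The anchor layout, parameters, and function name below are assumptions for illustration; the thesis's actual algorithm uses the full skew-t family.

```python
import numpy as np

def irls_position(anchors, ranges, nu=4.0, sigma=1.0, n_iter=30):
    """Robust 2-D position estimate from range measurements.

    Iteratively reweighted Gauss-Newton with Student-t weights: the
    E-step of a t error model yields exactly these weights, so large
    residuals (outliers) are downweighted. A small nu means heavier
    tails and more aggressive downweighting.
    """
    x = anchors.mean(axis=0)  # crude initial guess
    for _ in range(n_iter):
        diffs = x - anchors
        pred = np.linalg.norm(diffs, axis=1)   # predicted ranges
        resid = ranges - pred
        # E-step-like weights: small where the residual is large
        w = (nu + 1.0) / (nu + (resid / sigma) ** 2)
        # Weighted Gauss-Newton step
        J = diffs / pred[:, None]              # Jacobian of pred w.r.t. x
        A = J * w[:, None]
        x = x + np.linalg.solve(J.T @ A, A.T @ resid)
    return x
```

With one grossly corrupted range among several clean ones, the estimate stays close to the true position, whereas an unweighted NLS fit would be pulled towards the outlier.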
This thesis additionally shows how the parameters of heavy-tailed and skewed error distributions can be fitted to training data. It furthermore gives an overview of other parametric methods for solving the positioning problem: how training data is handled and summarised for them, how positioning is done with them, and how they compare with nonparametric methods. These methods are analysed in extensive tests in a wireless local area network, which reveal the strengths and weaknesses of each method.
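Fitting a skewed error model to training residuals can be sketched by maximum likelihood. The thesis works with the skew-t family; here SciPy's skew-normal serves as a readily available skewed stand-in, and the synthetic residuals are invented for illustration.

```python
import numpy as np
from scipy import stats

def fit_error_model(residuals):
    """Fit a normal and a skew-normal model to measurement residuals
    by maximum likelihood and report both log-likelihoods.

    Since the normal is the skew-normal with shape a = 0, the skewed
    model can only fit the training data at least as closely, which
    is the overfitting risk discussed in the text.
    """
    mu, sd = stats.norm.fit(residuals)
    a, loc, scale = stats.skewnorm.fit(residuals)
    ll_norm = stats.norm.logpdf(residuals, mu, sd).sum()
    ll_skew = stats.skewnorm.logpdf(residuals, a, loc, scale).sum()
    return {"normal": (mu, sd, ll_norm),
            "skewnorm": (a, loc, scale, ll_skew)}

# Right-skewed synthetic "training residuals"
rng = np.random.default_rng(0)
residuals = rng.gamma(shape=2.0, scale=1.5, size=2000)
report = fit_error_model(residuals)
```

On skewed training data the skewed model wins in log-likelihood; whether that advantage carries over to the positioning data depends on the distributions matching, as the abstract notes.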