Approximate Bayesian inference methods for stochastic state space models
|Publisher||Tampere University of Technology|
|Status||Published - 23 February 2018|
|Name||Tampere University of Technology. Publication|
Bayesian inference for non-linear state space models generally requires the use of approximations, since the exact posterior distribution is available in closed form only in a few special cases. The approximation methods can be roughly classified into two groups: deterministic methods, where the intractable posterior distribution is approximated by a member of a family of more tractable distributions (e.g. Gaussian and variational Bayes (VB) approximations), and stochastic sampling-based methods (e.g. particle filters). Gaussian approximation refers to approximating the posterior directly with a Gaussian distribution, and can be readily applied to models with Gaussian process and measurement noise. Well-known examples are the extended Kalman filter and the sigma-point-based unscented Kalman filter. The VB method is based on minimizing the Kullback-Leibler divergence between the approximate distribution and the true posterior, where the approximation is chosen from a family of simpler, more tractable distributions.
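As a concrete illustration of the deterministic group, one predict-update step of an extended Kalman filter can be sketched as below. This is a generic sketch, not the thesis's method; the model functions `f`, `h`, their Jacobians, and the noise covariances `Q`, `R` are illustrative placeholders.

```python
import numpy as np

def ekf_step(m, P, y, f, F_jac, h, H_jac, Q, R):
    """One EKF step for x_k = f(x_{k-1}) + q_k, y_k = h(x_k) + r_k."""
    # Predict: propagate the mean through f, the covariance via the Jacobian.
    m_pred = f(m)
    F = F_jac(m)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h at the predicted mean and apply the Kalman update.
    H = H_jac(m_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (y - h(m_pred))
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```

The unscented Kalman filter replaces the Jacobian linearization with sigma-point propagation, but the predict-update structure is the same.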
The first main contribution of the thesis is the development of a VB approximation for linear regression problems with outlier-robust measurement distributions. A broad family of outlier-robust distributions can be represented as infinite mixtures of Gaussians, called Gaussian scale mixture models; this family includes e.g. the t-distribution, the Laplace distribution and the contaminated normal distribution. The VB approximation for the regression problem readily extends to the estimation of state space models and is presented in the introductory part.
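The scale mixture representation can be checked numerically: drawing a Gamma-distributed precision and then a Gaussian conditional on it yields Student's t marginally. A minimal sketch (the degrees of freedom and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 10.0      # degrees of freedom of the target t-distribution
n = 200_000

# Gaussian scale mixture: lambda ~ Gamma(nu/2, rate nu/2),
# then x | lambda ~ N(0, 1/lambda). Marginally, x is Student's t with nu dof.
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
x = rng.normal(0.0, 1.0 / np.sqrt(lam))

# For nu > 2 the t-distribution has variance nu / (nu - 2); here 10/8 = 1.25.
sample_var = x.var()
```

In the VB treatment, these latent scale variables are given their own factor in the approximating distribution, which keeps the updates tractable.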
VB approximations can also be used for approximate inference in continuous-discrete Gaussian models, where the dynamics are modeled with stochastic differential equations and measurements are obtained at discrete time instants. The second main contribution is the presentation of a VB approximation for these models and an explanation of how the resulting algorithm connects to the Gaussian filtering and smoothing framework.
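To make the continuous-discrete setting concrete, here is a generic Gaussian filter for a linear SDE (an Ornstein-Uhlenbeck process): the mean and covariance are integrated forward between measurement times, and a discrete Kalman update is applied at each observation. This is a standard sketch, not the thesis's VB algorithm; the constants `a`, `q`, `R` are illustrative.

```python
import numpy as np

# dx = -a x dt + sqrt(q) dW, observed as y_k = x(t_k) + r_k with r_k ~ N(0, R).
a, q, R = 1.0, 0.5, 0.1

def predict(m, P, dt, steps=1000):
    """Euler integration of the moment ODEs dm/dt = -a m, dP/dt = -2 a P + q."""
    h = dt / steps
    for _ in range(steps):
        m += -a * m * h
        P += (-2 * a * P + q) * h
    return m, P

def update(m, P, y):
    """Discrete-time Kalman update at a measurement instant."""
    S = P + R
    K = P / S
    return m + K * (y - m), P - K * S * K
```

Over a long prediction horizon the variance settles at the stationary value q / (2a), which gives a quick sanity check on the integration.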
The third contribution of the thesis is the development of parameter estimation using particle Markov chain Monte Carlo (PMCMC) methods and twisted particle filters. Twisted particle filters are obtained from standard particle filters by applying a special weighting to the sampling law of the filter. The weighting is chosen to minimize the variance of the marginal likelihood estimate, and the resulting PMCMC algorithm is more efficient than one built on a conventional particle filter. The exact optimal weighting is generally not available, but it can be approximated using the Gaussian filtering and smoothing framework.
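The quantity being estimated here is the marginal likelihood produced by a particle filter, which PMCMC uses inside its acceptance ratio. A minimal bootstrap particle filter for a scalar linear-Gaussian model is sketched below, without any twisting; the model constants and particle count are illustrative. The linear-Gaussian choice is deliberate, since the estimate can then be checked against the exact Kalman filter likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.9, 0.1, 0.2   # illustrative model constants

# Simulate data from x_k = a x_{k-1} + N(0, q), y_k = x_k + N(0, r), x_0 = 0.
T, x = 50, 0.0
ys = []
for _ in range(T):
    x = a * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))

def pf_loglik(ys, N=2000):
    """Bootstrap PF estimate of the log marginal likelihood log p(y_1:T)."""
    particles = np.zeros(N)   # all particles start at the known x_0 = 0
    ll = 0.0
    for y in ys:
        # Propagate through the dynamics (the "sampling law" that twisting reweights).
        particles = a * particles + rng.normal(0, np.sqrt(q), N)
        # Weight by the measurement likelihood, in log space for stability.
        logw = -0.5 * np.log(2 * np.pi * r) - (y - particles) ** 2 / (2 * r)
        mx = logw.max()
        w = np.exp(logw - mx)
        ll += mx + np.log(w.mean())   # estimate of log p(y_k | y_1:k-1)
        # Multinomial resampling.
        particles = rng.choice(particles, size=N, p=w / w.sum())
    return ll
```

A twisted filter would multiply the propagation and weighting steps by a lookahead function of the next observations, chosen to reduce the variance of `pf_loglik`.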