TUTCRIS - Tampere University of Technology


On the stability of the discrete time filter and the uniform convergence of its approximations



Publisher: Tampere University of Technology
ISBN (electronic): 978-952-15-1923-9
ISBN (print): 978-952-15-1892-8
Status: Published - 14 December 2007
Publication type (Ministry of Education classification): G4 Doctoral dissertation (monograph)


Series: Tampere University of Technology. Publication
Publisher: Tampere University of Technology
ISSN (print): 1459-2045


The stochastic discrete time filter, also known as the Bayesian or optimal filter, has a wide range of applications in modern technology. In general, the filter recursion is intractable, so in practice it has to be approximated by some numerical method. Typically, the accuracy of the approximation can be increased only by making its evaluation computationally more expensive. Moreover, in some applications the filter is run for an indefinite time. In such a case it is desirable that, at a fixed computational cost, the error of the approximation is guaranteed to remain below some level, and that this level can be made as small as desired by increasing the computational cost of the approximating algorithm. If this can be done, the approximation is said to be uniformly convergent. Several authors have pointed out that the ability to approximate the filter in a uniformly convergent manner is closely related to the stability of the exact filter with respect to its initial conditions. This relation is manifested in the literature by results stating that, under some assumptions, a stable filter admits uniformly convergent approximations. This thesis addresses both of the above mentioned problems: the stability of the discrete time filter with respect to its initial conditions, and the uniform convergence of certain filter approximations. The main result regarding stability establishes easily verifiable sufficient conditions for filter stability; stability here means that the total variation distance between two filters with different initial distributions converges to zero almost surely. The analysis also provides rates for this convergence. Similarly, the main result regarding uniform convergence establishes sufficient conditions for the uniform convergence of certain filter approximation algorithms. The uniform convergence is proved in the mean sense, not almost surely.
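The notion of filter stability described above can be observed directly in a toy setting where the filter recursion is tractable. The sketch below uses a hypothetical two-state hidden Markov model (not a model from the thesis) and runs the exact predict/update recursion from two different initial distributions, checking that their total variation distance shrinks:

```python
import random

# Hypothetical two-state HMM, chosen only for illustration: here the exact
# discrete time filter recursion is tractable, so stability with respect to
# the initial condition can be observed directly.
A = [[0.7, 0.3], [0.4, 0.6]]   # signal transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]   # observation likelihoods P(y | x)

def filter_step(p, y):
    """One step of the exact filter: predict with A, then update with B."""
    pred = [sum(p[i] * A[i][j] for i in range(2)) for j in range(2)]
    post = [pred[j] * B[j][y] for j in range(2)]
    z = sum(post)
    return [q / z for q in post]

def tv(p, q):
    """Total variation distance between two distributions on {0, 1}."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Simulate a signal path and its observations.
random.seed(0)
x, ys = 0, []
for _ in range(50):
    x = 0 if random.random() < A[x][0] else 1
    ys.append(0 if random.random() < B[x][0] else 1)

# Two filters started from very different initial distributions.
p1, p2 = [0.99, 0.01], [0.01, 0.99]
d0 = tv(p1, p2)
for y in ys:
    p1, p2 = filter_step(p1, y), filter_step(p2, y)
print(tv(p1, p2) < d0)  # → True: the filters forget their initial conditions
```

With these mixing transition probabilities each full predict/update step is a contraction in total variation, so the two filters merge geometrically fast; this is exactly the almost sure forgetting of the initial distribution that the stability result formalizes.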
Although the stability of the filter is not shown to be one of the sufficient conditions for uniform convergence, similar conditions are shown to be sufficient for both stability and uniform convergence. Perhaps the most important conclusion of the stability theorem is that, under some assumptions, the filter is stable provided that the tails of the observation noise distributions are sufficiently light compared to the tails of the signal noise distributions. In particular, the signal is not required to be ergodic or mixing, and the state space is not required to be compact. Moreover, it is not assumed in general that the observation noise enters the filter framework with a sufficiently small coefficient; this is assumed only when the observation noise and signal noise distributions have equally light tails. The uniform convergence is proved for a general class of filter approximation methods, and in particular the conditions are shown to be satisfied by a certain auxiliary particle filter type algorithm and by a certain sampling/importance resampling filter type sequential Monte Carlo algorithm. These algorithms are assumed to employ either the multinomial resampling scheme or the tree-based branching algorithm. The uniform convergence is obtained with respect to the sample size, but because the computational cost of these filter approximation algorithms is determined by the sample size, the convergence is also uniform with respect to the computational cost. The uniform convergence results are also illustrated by computer simulations.
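To make the sampling/importance resampling idea concrete, the following is a minimal bootstrap-type particle filter with multinomial resampling, sketched for an assumed scalar linear-Gaussian model (X_k = 0.9 X_{k-1} + V_k, Y_k = X_k + W_k, standard Gaussian noises); it is an illustration of the general scheme, not the thesis's exact algorithms or framework:

```python
import math
import random

def particle_filter(ys, n):
    """Bootstrap-type SIR particle filter with multinomial resampling,
    for the illustrative model X_k = 0.9 X_{k-1} + V_k, Y_k = X_k + W_k."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # Propagate each particle through the signal dynamics
        # (the proposal is the prior transition kernel).
        parts = [0.9 * x + random.gauss(0.0, 1.0) for x in parts]
        # Importance weights from the Gaussian observation likelihood.
        ws = [math.exp(-0.5 * (y - x) ** 2) for x in parts]
        z = sum(ws)
        ws = [w / z for w in ws]
        # Weighted posterior mean estimate at this time step.
        means.append(sum(w * x for w, x in zip(ws, parts)))
        # Multinomial resampling: draw n particles with replacement,
        # with probabilities proportional to the weights.
        parts = random.choices(parts, weights=ws, k=n)
    return means

# Simulate observations from the model and run the filter.
random.seed(1)
x, ys = 0.0, []
for _ in range(30):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    ys.append(x + random.gauss(0.0, 1.0))
est = particle_filter(ys, 500)
```

Here the sample size n is the single knob that trades accuracy for computational cost, which is why convergence that is uniform in time with respect to n is, as the abstract notes, also uniform with respect to the computational cost.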

