Weak convergence and optimal tuning of the reversible jump algorithm

Bibliographic Details
Published in: Mathematics and Computers in Simulation, Vol. 161, pp. 32–51
Main Authors: Gagnon, Philippe; Bédard, Mylène; Desgagné, Alain
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.07.2019
ISSN: 0378-4754
1872-7166
DOI: 10.1016/j.matcom.2018.06.007

Summary: The reversible jump algorithm is a useful Markov chain Monte Carlo method introduced by Green (1995) that allows switches between subspaces of differing dimensionality, and therefore model selection. Although this method is now increasingly used in key areas (e.g. biology and finance), it remains challenging to implement. In this paper, we focus on a simple sampling context in order to obtain theoretical results that lead to an optimal tuning procedure for the reversible jump algorithm considered, and consequently to easy implementation. The key result is the weak convergence of the sequence of stochastic processes engendered by the algorithm. It represents the main contribution of this paper as it is, to our knowledge, the first weak convergence result for the reversible jump algorithm. Because the sampler updates the parameters according to a random walk, this result allows us to recover the well-known 0.234 rule for finding the optimal scaling. It also leads to an answer to the question: "with what probability should a parameter update, rather than a model switch, be proposed at each iteration?"
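To make the tuning question concrete, the following is a minimal sketch of a reversible jump sampler, not the paper's actual setup or results: a toy target mixing two nested models with independent standard normal components, where `tau` (the probability of proposing a random-walk parameter update rather than a model switch) and `scale` (the random-walk scaling) are the two tuning parameters the abstract discusses. All names and the specific target are illustrative assumptions.

```python
import math
import random

# Illustrative toy target (an assumption, not the paper's model):
#   model 1: theta = (x1,)      with density phi(x1)
#   model 2: theta = (x1, x2)   with density phi(x1) * phi(x2)
# with equal prior model probabilities. Drawing the birth proposal u from
# phi makes the dimension-changing acceptance ratio equal to 1 here.

def log_phi(x):
    # log density of the standard normal
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def rj_sampler(n_iter, tau=0.5, scale=2.38, seed=0):
    rng = random.Random(seed)
    k, theta = 1, [0.0]                      # start in model 1
    accepted = proposed = 0                  # random-walk acceptance bookkeeping
    trace_k, trace_x1 = [], []
    for _ in range(n_iter):
        if rng.random() < tau:
            # random-walk Metropolis update of one coordinate
            proposed += 1
            i = rng.randrange(len(theta))
            prop = theta[i] + scale * rng.gauss(0.0, 1.0)
            log_a = log_phi(prop) - log_phi(theta[i])
            if rng.random() < math.exp(min(0.0, log_a)):
                theta[i] = prop
                accepted += 1
        else:
            # dimension-changing (reversible jump) move
            if k == 1:
                # birth: draw u ~ phi; proposal density cancels the new
                # component's density, so the move is always accepted here
                k, theta = 2, theta + [rng.gauss(0.0, 1.0)]
            else:
                # death: deterministic reverse move, also always accepted
                k, theta = 1, theta[:1]
        trace_k.append(k)
        trace_x1.append(theta[0])
    return trace_k, trace_x1, accepted / max(proposed, 1)
```

In this 1-dimensional toy, `scale = 2.38` yields a random-walk acceptance rate well above 0.234; the 0.234 rule is an asymptotic (high-dimensional) result, which is part of what the weak convergence analysis in the paper makes precise, along with how `tau` should be chosen.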