The Multiple-Try Method and Local Optimization in Metropolis Sampling


Bibliographic Details
Published in: Journal of the American Statistical Association, Vol. 95, No. 449, pp. 121-134
Main Authors: Liu, Jun S.; Liang, Faming; Wong, Wing Hung
Format: Journal Article
Language: English
Published: Alexandria, VA: American Statistical Association / Taylor & Francis, 01.03.2000
ISSN: 0162-1459, 1537-274X
DOI: 10.1080/01621459.2000.10473908


Summary: This article describes a new Metropolis-like transition rule, the multiple-try Metropolis, for Markov chain Monte Carlo (MCMC) simulations. By using this transition rule together with adaptive direction sampling, we propose a novel method for incorporating local optimization steps into an MCMC sampler in a continuous state space. Numerical studies show that the new method performs significantly better than the traditional Metropolis-Hastings (M-H) sampler. With minor tailoring in the use of this rule, the multiple-try method can also be exploited to achieve the effect of a griddy Gibbs sampler without having to bear with griddy approximations, and the effect of a hit-and-run algorithm without having to figure out the required conditional distribution in a random direction.
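
For a concrete picture of the transition rule, the following is a minimal sketch of one multiple-try Metropolis step in Python. It is not the authors' code: it assumes a symmetric Gaussian random-walk proposal and the special-case weight w(y, x) = pi(y), whereas the article's general rule also covers asymmetric proposals and weights of the form pi(y) T(y, x) lambda(y, x). The names mtm_step, log_pi, k, and scale are illustrative choices.

```python
import numpy as np

def mtm_step(x, log_pi, k=5, scale=1.0, rng=None):
    # One multiple-try Metropolis step (illustrative sketch), assuming a
    # symmetric Gaussian random-walk proposal T(x, .) = N(x, scale^2 I)
    # and the weight w(y, x) = pi(y).
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = x.size

    # Step 1: draw k trial proposals y_1, ..., y_k from T(x, .).
    ys = x + scale * rng.standard_normal((k, d))
    log_w_y = np.array([log_pi(y) for y in ys])

    # Step 2: pick one trial y with probability proportional to w(y_j, x).
    probs = np.exp(log_w_y - log_w_y.max())
    probs /= probs.sum()
    y = ys[rng.choice(k, p=probs)]

    # Step 3: draw k-1 reference points from T(y, .) and include x itself.
    x_ref = y + scale * rng.standard_normal((k - 1, d))
    log_w_x = np.array([log_pi(xr) for xr in x_ref] + [float(log_pi(x))])

    # Step 4: accept y with probability
    #   min(1, sum_j w(y_j, x) / sum_j w(x_j*, y)).
    log_ratio = np.logaddexp.reduce(log_w_y) - np.logaddexp.reduce(log_w_x)
    return y if np.log(rng.uniform()) < min(0.0, log_ratio) else x

# Usage: sample a bimodal 1-d target, where multiple tries aid mode-hopping.
log_target = lambda z: np.logaddexp(-0.5 * ((z[0] - 3.0) / 0.5) ** 2,
                                    -0.5 * ((z[0] + 3.0) / 0.5) ** 2)
state, chain = np.array([0.0]), []
for _ in range(2000):
    state = mtm_step(state, log_target, k=10, scale=2.0)
    chain.append(state[0])
```

Using several trials per step lets the chain consider proposals far from the current state while retaining a valid acceptance rule, which is what allows the larger step size (scale=2.0) in the example above.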