Multifidelity multilevel Monte Carlo to accelerate approximate Bayesian parameter inference for partially observed stochastic processes
| Published in | Journal of Computational Physics, Vol. 469, p. 111543 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Cambridge: Elsevier Inc (Elsevier Science Ltd), 15.11.2022 |
| ISSN | 0021-9991, 1090-2716 |
| DOI | 10.1016/j.jcp.2022.111543 | 
| Summary: | Models of stochastic processes are widely used in almost all fields of science. Theory validation, parameter estimation, and prediction all require model calibration and statistical inference using data. However, data are almost always incomplete observations of reality. This leads to a great challenge for statistical inference because the likelihood function will be intractable for almost all partially observed stochastic processes. This renders many statistical methods, especially within a Bayesian framework, impossible to implement. Therefore, computationally expensive likelihood-free approaches are applied that replace likelihood evaluations with realisations of the model and observation process. For accurate inference, however, likelihood-free techniques may require millions of expensive stochastic simulations. To address this challenge, we develop a new method based on recent advances in multilevel and multifidelity methods for parameter inference using partially observed Markov processes. Our novel approach combines the multilevel Monte Carlo telescoping summation, applied to a sequence of approximate Bayesian posterior targets, with a multifidelity rejection sampler that learns from computationally inexpensive model approximations to minimise the number of computationally expensive exact simulations required for accurate inference. We present the derivation of our new algorithm for likelihood-free Bayesian inference, discuss practical implementation details, and demonstrate substantial performance improvements. Using examples from systems biology, we demonstrate improvements of more than two orders of magnitude over standard rejection sampling techniques. Our approach is generally applicable to accelerate other sampling schemes, such as sequential Monte Carlo, to enable feasible Bayesian analysis for realistic practical applications in physics, chemistry, biology, epidemiology, ecology and economics. We provide source code implementations of our methods and demonstrations (available at https://github.com/davidwarne/MLMCandMultifidelityForABC).
| Highlights: | • Acceleration of approximate Bayesian computation by two orders of magnitude. • Method combines orthogonal techniques of multilevel and multifidelity methods. • Optimal algorithm tuning and configuration is explored. • Methods demonstrated numerically for stochastic models of biochemical reaction networks. (A hedged sketch of the baseline ABC rejection sampler follows this record.) |
|---|---|
| ISSN: | 0021-9991, 1090-2716 |
| DOI: | 10.1016/j.jcp.2022.111543 |
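For orientation only: the abstract describes the algorithmic ingredients at a high level. The multilevel part rests on the standard MLMC telescoping identity over a decreasing sequence of ABC acceptance thresholds ε0 > ε1 > ... > εL, i.e. E_{εL}[f] = E_{ε0}[f] + Σ_{l=1}^{L} (E_{εl}[f] − E_{εl−1}[f]), with each correction term estimated from coupled samples. The sketch below is a minimal Python illustration of the baseline ABC rejection sampler that the paper accelerates, with an optional cheap-model pre-screen standing in for the multifidelity idea. All function names are placeholders, the pre-screen is a simplified (and in general biased) illustration rather than the paper's unbiased multifidelity rejection sampler, and none of this reflects the code in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(data, epsilon, n_accept,
                  sample_prior, simulate_exact, distance,
                  simulate_approx=None, prescreen_epsilon=None):
    """Baseline ABC rejection sampling with an optional cheap-model pre-screen.

    Placeholders (not from the paper's code):
      sample_prior()         -> draw theta from the prior
      simulate_exact(theta)  -> expensive exact simulation of model + observation process
      simulate_approx(theta) -> cheap approximate simulation (optional)
      distance(x, data)      -> discrepancy between simulated and observed data

    Accepted parameters approximate the ABC posterior
      p(theta | distance(simulate_exact(theta), data) <= epsilon).
    The pre-screen only illustrates how an inexpensive approximation can avoid
    some exact simulations; the paper instead combines an unbiased multifidelity
    rejection sampler with an MLMC telescoping sum over epsilon levels.
    """
    accepted = []
    n_exact_calls = 0
    while len(accepted) < n_accept:
        theta = sample_prior()
        # Optional cheap pre-screen: skip the expensive simulation when the
        # approximate model already looks very far from the data.
        if simulate_approx is not None and prescreen_epsilon is not None:
            if distance(simulate_approx(theta), data) > prescreen_epsilon:
                continue
        x = simulate_exact(theta)  # expensive exact stochastic simulation
        n_exact_calls += 1
        if distance(x, data) <= epsilon:
            accepted.append(theta)
    return np.asarray(accepted), n_exact_calls


# Toy usage: infer the rate of a Poisson count model from one observed count.
if __name__ == "__main__":
    observed = np.array([7.0])
    samples, n_calls = abc_rejection(
        data=observed,
        epsilon=1.0,
        n_accept=200,
        sample_prior=lambda: rng.uniform(0.0, 20.0),
        simulate_exact=lambda lam: np.array([rng.poisson(lam)], dtype=float),
        distance=lambda x, y: float(np.abs(x - y).sum()),
    )
    print(f"posterior mean ~ {samples.mean():.2f} using {n_calls} exact simulations")
```

In this toy setting the pre-screen is unused; in a multifidelity setting, `simulate_approx` would be, for example, a tau-leaping or moment-closure approximation of an exact stochastic simulation algorithm, which is the kind of inexpensive model approximation the abstract refers to.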