Multiview depth map enhancement by variational Bayes inference estimation of Dirichlet mixture models

Bibliographic Details
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 1528–1532
Main Authors: Rana, Pravin Kumar; Ma, Zhanyu; Taghia, Jalil; Flierl, Markus
Format: Conference Proceeding
Language: English
Published: IEEE, 01.05.2013
ISSN: 1520-6149
DOI: 10.1109/ICASSP.2013.6637907


More Information
Summary: High-quality view synthesis is a prerequisite for future free-viewpoint television. It will enable viewers to move freely in a dynamic real-world scene. Depth-image-based rendering algorithms will play a pivotal role when synthesizing an arbitrary number of novel views by using only a subset of captured views and corresponding depth maps. Usually, each depth map is estimated individually by stereo-matching algorithms and, hence, lacks inter-view consistency. This inconsistency degrades the quality of view synthesis. This paper enhances the inter-view consistency of multiview depth imagery. First, our approach classifies the color information in the multiview color imagery by modeling color with a mixture of Dirichlet distributions, where the model parameters are estimated in a Bayesian framework with variational inference. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further sub-clustering. Finally, the resulting mean of each sub-cluster is used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the average quality of virtual views by up to 0.8 dB when compared to views synthesized using conventionally estimated depth maps.
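The pipeline in the summary can be sketched in a few lines of Python. The sketch below is illustrative only: it substitutes scikit-learn's variational Bayesian Gaussian mixture (BayesianGaussianMixture with a Dirichlet weight prior) for the paper's mixture of Dirichlet distributions, pools pixels across all views, and uses k-means for the depth sub-clustering step. The function name enhance_depth_maps, the cluster counts, and the array layout are assumptions, not taken from the paper.

```python
# Hypothetical sketch of the color-guided depth enhancement pipeline described
# in the abstract. NOTE: the paper fits a mixture of Dirichlet distributions
# with variational Bayes; here sklearn's BayesianGaussianMixture (a variational
# Bayesian Gaussian mixture) stands in for that model. Names and parameters
# are illustrative.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.cluster import KMeans


def enhance_depth_maps(colors, depths, n_color_clusters=8, n_depth_subclusters=3):
    """Enhance multiview depth maps guided by color clusters.

    colors : (V, H, W, 3) float array of multiview color images in [0, 1]
    depths : (V, H, W)    float array of per-view depth maps
    Returns an enhanced copy of `depths`.
    """
    V, H, W, _ = colors.shape
    color_pixels = colors.reshape(-1, 3)   # pool pixels from all views
    depth_pixels = depths.reshape(-1)

    # Step 1: cluster color across all views with a variational Bayesian
    # mixture (stand-in for the paper's Dirichlet mixture model).
    color_model = BayesianGaussianMixture(
        n_components=n_color_clusters,
        weight_concentration_prior_type="dirichlet_distribution",
        max_iter=200,
        random_state=0,
    )
    color_labels = color_model.fit_predict(color_pixels)

    enhanced = depth_pixels.copy()

    # Steps 2-3: within each color cluster, sub-cluster the corresponding
    # depth values and replace them with their sub-cluster means.
    for c in range(n_color_clusters):
        idx = np.where(color_labels == c)[0]
        if idx.size < n_depth_subclusters:
            continue
        d = depth_pixels[idx].reshape(-1, 1)
        sub = KMeans(n_clusters=n_depth_subclusters, n_init=10, random_state=0).fit(d)
        enhanced[idx] = sub.cluster_centers_[sub.labels_, 0]

    return enhanced.reshape(V, H, W)
```

Replacing each depth value by its sub-cluster mean is what enforces inter-view consistency in this sketch: pixels from different views that fall in the same color cluster and depth sub-cluster receive identical depth values.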