Segmenting Brain Tumor Using Cascaded V-Nets in Multimodal MR Images

Bibliographic Details
Published in: Frontiers in Computational Neuroscience, Vol. 14, p. 9
Main Authors: Hua, Rui; Huo, Quan; Gao, Yaozong; Sui, He; Zhang, Bing; Sun, Yu; Mo, Zhanhao; Shi, Feng
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation; Frontiers Media S.A., 14.02.2020
ISSN: 1662-5188
DOI: 10.3389/fncom.2020.00009

More Information
Summary: In this work, we propose a novel cascaded V-Nets method to segment brain tumor substructures in multimodal brain magnetic resonance imaging. Although V-Net has been successfully used in many segmentation tasks, we demonstrate that its performance can be further enhanced by a cascaded structure and an ensemble strategy. Briefly, our baseline V-Net consists of four levels with encoding and decoding paths and intra- and inter-path skip connections. Focal loss is chosen to improve performance on hard samples and to balance the positive and negative samples. We further propose three preprocessing pipelines for the multimodal magnetic resonance images to train different models. By ensembling the segmentation probability maps obtained from these models, the segmentation result is further improved. In addition, we propose to segment the whole tumor first and then divide it into tumor necrosis, edema, and enhancing tumor. Experimental results on the BraTS 2018 online validation set achieve average Dice scores of 0.9048, 0.8364, and 0.7748 for whole tumor, tumor core, and enhancing tumor, respectively. The corresponding values for the BraTS 2018 online testing set are 0.8761, 0.7953, and 0.7364, respectively. We also evaluate the proposed method on two additional data sets from local hospitals, comprising 28 and 28 subjects, for which the best results are 0.8635, 0.8036, and 0.7217, respectively. We further predict patient overall survival by ensembling multiple classifiers for the long-, mid-, and short-survival groups, achieving an accuracy of 0.519, a mean squared error of 367,240, and a Spearman correlation coefficient of 0.168 on the BraTS 2018 online testing set.
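As an illustration of the pipeline outlined in the summary, the sketch below shows how a binary focal loss and a cascaded, ensembled two-stage inference could look in practice. This is a minimal sketch, not the authors' released implementation: the model callables, tensor shapes, and the 0.5 threshold are assumptions made for the example, and the second-stage models are assumed to output per-substructure probabilities.

    import torch
    import torch.nn.functional as F


    def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # Focal loss (Lin et al., 2017): down-weights easy voxels by (1 - p_t)**gamma
        # so training focuses on hard samples and class imbalance is mitigated.
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        p_t = targets * p + (1 - targets) * (1 - p)              # probability of the true class
        alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
        return (alpha_t * (1 - p_t) ** gamma * bce).mean()


    @torch.no_grad()
    def cascaded_ensemble_segmentation(image, stage1_models, stage2_models, threshold=0.5):
        # Stage 1: average whole-tumor probability maps from models trained with
        # different preprocessing pipelines, then threshold into a binary mask.
        wt_prob = torch.stack([m(image) for m in stage1_models]).mean(dim=0)   # (B, 1, D, H, W)
        wt_mask = (wt_prob > threshold).float()

        # Stage 2: average per-substructure probabilities (necrosis, edema,
        # enhancing tumor) and assign labels only inside the whole-tumor mask.
        sub_prob = torch.stack([m(image) for m in stage2_models]).mean(dim=0)  # (B, C, D, H, W)
        labels = sub_prob.argmax(dim=1, keepdim=True).float() + 1              # labels 1..C
        return labels * wt_mask                                                # 0 = background

In the paper both stages are four-level V-Nets; here they are left as generic callables so the sketch only conveys the cascade-then-ensemble idea.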
Edited by: Spyridon Bakas, University of Pennsylvania, United States
Reviewed by: Eranga Ukwatta, Johns Hopkins University, United States; Siddhesh Pravin Thakur, University of Pennsylvania, United States; Anahita Fathi Kazerooni, University of Pennsylvania, United States
ISSN: 1662-5188
DOI: 10.3389/fncom.2020.00009