Multiple instance learning of deep convolutional neural networks for breast histopathology whole slide classification

Bibliographic Details
Published in: Proceedings (International Symposium on Biomedical Imaging), pp. 578-581
Main Authors: Das, Kausik; Conjeti, Sailesh; Roy, Abhijit Guha; Chatterjee, Jyotirmoy; Sheet, Debdoot
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2018
ISSN: 1945-8452
DOI: 10.1109/ISBI.2018.8363642

More Information
Summary: Whole slide histopathology images are the digitised versions of physical slides, created by stitching multiple overlapping image patches captured at different optical magnifications. With each slide typically about 2 GB in size, computer-aided whole slide pathology classification is a challenging task. Although local image patches generally provide more discriminative information than the whole slide, the pathological heterogeneity varying across the slide makes it difficult to classify the slide from patches alone. To address this, we propose a multiple instance learning (MIL) framework for convolutional neural networks (CNNs). We introduce a new pooling layer that aggregates the most informative features from the patches constituting a whole slide, without requiring inter-patch overlap or global slide coverage. This allows our method to jointly learn to discover informative features locally and the classification margin globally, without the need to individually annotate each local image patch in the training data. We evaluated performance using patient-level 5-fold cross-validation on 58 malignant and 24 benign cases of breast tumors, obtaining best accuracies of 89.52%, 89.06%, 88.84% and 87.67% at 40×, 100×, 200× and 400× magnifications respectively, while processing each slide in under 40 ms.
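
The summary describes a MIL pooling layer that aggregates the most informative patch-level CNN responses into a slide-level prediction using only slide-level labels. Below is a minimal sketch of one common way such MIL pooling can be realised (max pooling over per-patch class scores), assuming a PyTorch setting; the backbone, feature dimension and choice of max pooling are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal MIL sketch: a small CNN scores each patch of a slide (the "bag"),
# and a pooling step keeps the strongest patch response per class so that
# only the slide-level label is needed during training.
# The architecture details here are illustrative assumptions.
import torch
import torch.nn as nn

class MILPatchClassifier(nn.Module):
    def __init__(self, num_classes=2, feat_dim=64):
        super().__init__()
        # Small CNN applied independently to every patch (instance).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patches):
        # patches: (num_patches, 3, H, W), all patches from one slide (bag).
        feats = self.encoder(patches)        # (num_patches, feat_dim)
        scores = self.classifier(feats)      # per-patch class scores
        # MIL pooling: take the maximum patch score per class as the
        # slide-level score, i.e. the most informative patch decides.
        slide_scores, _ = scores.max(dim=0, keepdim=True)  # (1, num_classes)
        return slide_scores

# Usage: one slide represented as a bag of 40 patches of size 224x224.
model = MILPatchClassifier()
bag = torch.randn(40, 3, 224, 224)
print(model(bag).shape)  # torch.Size([1, 2])
```

Because the pooling is taken over the patch dimension, the bag size can vary from slide to slide, which is what lets such a framework work without inter-patch overlap or full slide coverage.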