Predicting Semantic Descriptions from Medical Images with Convolutional Neural Networks
| Published in | Information Processing in Medical Imaging Vol. 24; pp. 437 - 448 |
|---|---|
| Main Authors | , , , , |
| Format | Book Chapter; Journal Article |
| Language | English |
| Published | Cham: Springer International Publishing, 2015 |
| Series | Lecture Notes in Computer Science |
| ISBN | 9783319199917 3319199919 |
| ISSN | 0302-9743 1011-2499 1611-3349 |
| DOI | 10.1007/978-3-319-19992-4_34 |
| Summary: | Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is infeasible for sufficient amounts of data. An alternative to manual annotation is to use the enormous amount of knowledge encoded in imaging data and corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content, but suffer from the typical label distributions in medical imaging data, where only a small part of the data consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry. |
|---|---|
| Bibliography: | T. Schlegl—This work has received funding from the European Union FP7 (KHRESMOI FP7-257528, VISCERAL FP7-318068) and the Austrian Federal Ministry of Science, Research and Economy. |
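The summary above describes a CNN that maps imaging data to a semantic representation of the clinical report rather than directly to voxel labels. The following is a minimal, hypothetical sketch of such a model, not the authors' implementation: it assumes 64x64 grayscale OCT patches, an illustrative three-term semantic target (IRC, SRF, normal tissue), PyTorch as the framework, and made-up layer sizes; the name `SemanticOCTNet` is also an assumption.

```python
# Minimal sketch (assumptions noted in the text above): a small CNN that maps
# 2D OCT patches to a multi-label "semantic target" vector, e.g. presence of
# intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal tissue,
# trained against weak volume-level descriptions propagated to the patches.
import torch
import torch.nn as nn


class SemanticOCTNet(nn.Module):
    def __init__(self, n_semantic_terms: int = 3):
        super().__init__()
        # Two convolution/pooling stages extract local retinal texture features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected head predicts one logit per semantic term.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 128), nn.ReLU(),
            nn.Linear(128, n_semantic_terms),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = SemanticOCTNet()
    # A batch of 64x64 grayscale OCT patches; targets are weak, volume-level
    # semantic descriptions assigned to the patches sampled from each volume.
    patches = torch.randn(8, 1, 64, 64)
    weak_targets = torch.randint(0, 2, (8, 3)).float()
    loss = nn.BCEWithLogitsLoss()(model(patches), weak_targets)
    loss.backward()
    print(loss.item())
```

Training with a multi-label loss such as `BCEWithLogitsLoss` against volume-level term presence is one simple way to couple image content to semantic concepts under weak supervision; the paper's actual architecture and semantic target encoding may differ.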