Generative Adversarial Network for Medical Images (MI-GAN)

Bibliographic Details
Published in: Journal of Medical Systems, Vol. 42, No. 11, Article 231 (11 pages)
Main Authors: Iqbal, Talha; Ali, Hazrat
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.11.2018
ISSN: 0148-5598
EISSN: 1573-689X
DOI: 10.1007/s10916-018-1072-9

Summary: Deep learning algorithms produce state-of-the-art results for many machine learning and computer vision tasks. To perform well on a given task, these algorithms require large datasets for training. However, deep learning algorithms lack generalization and suffer from over-fitting whenever trained on a small dataset, especially when one is dealing with medical images. For supervised image analysis in medical imaging, obtaining image data along with the corresponding annotated ground truths is costly as well as time consuming, since annotation of the data must be done manually by medical experts. In this paper, we propose a new Generative Adversarial Network for Medical Imaging (MI-GAN). MI-GAN generates synthetic medical images together with their segmented masks, which can then be used for supervised analysis of medical images. In particular, we present MI-GAN for the synthesis of retinal images. The proposed method generates segmented images more precisely than existing techniques. The proposed model achieves a Dice coefficient of 0.837 on the STARE dataset and 0.832 on the DRIVE dataset, which is state-of-the-art performance on both datasets.
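As an illustration of the evaluation metric quoted above, the following minimal Python sketch computes the Dice coefficient between a predicted binary vessel mask and a ground-truth mask. The function name, the NumPy dependency, and the epsilon smoothing term are assumptions made for this example; this is not the authors' implementation.

    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        # Dice = 2 * |A intersect B| / (|A| + |B|), for binary masks A and B.
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        # eps (an assumed smoothing term) avoids division by zero on empty masks.
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Example: identical masks give Dice = 1.0; disjoint masks give Dice ~ 0.0.
    a = np.array([[1, 1], [0, 0]])
    b = np.array([[1, 0], [0, 0]])
    print(dice_coefficient(a, b))  # 2*1 / (2+1) ~ 0.667

A Dice coefficient of 0.837, as reported on STARE, thus indicates that the generated segmentation masks overlap heavily with the expert-annotated ground truth.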