Breast Cancer Classification Using Transfer Learning

Bibliographic Details
Published in: Evolving Technologies for Computing, Communication and Smart World, Vol. 694, pp. 425-436
Main Authors: Seemendra, Animesh; Singh, Rahul; Singh, Sukhendra
Format: Book Chapter
Language: English
Published: Singapore: Springer Singapore, 2020
Series: Lecture Notes in Electrical Engineering
ISBN: 9789811578038; 9811578036
ISSN: 1876-1100; 1876-1119
DOI: 10.1007/978-981-15-7804-5_32


More Information
Summary: Cancer is one of the most lethal diseases, and among females, breast cancer is the most common cancer and can lead to death if not diagnosed properly. Medical technology has advanced considerably over the years, yet a biopsy remains the only way to detect breast cancer. Pathologists detect cancer by examining histological images under the microscope. Visual inspection is a critical task: it demands close attention and skill and is time-consuming. There is therefore a need for a faster and more efficient system for detecting breast cancer. Advances in machine learning and image processing have led to multiple studies aimed at building an efficient, partially or fully computer-monitored diagnosis system. In this paper, we use histological images to detect and classify invasive ductal carcinoma. Our approach is based on convolutional neural networks, an advanced and efficient technique for working with images in machine learning. We compared several well-known deep learning models and used these pre-trained CNN architectures with fine-tuning to provide an efficient solution. We also applied image augmentation to further improve the results. In this study, we evaluated VGG, ResNet, DenseNet, MobileNet, and EfficientNet. The best result was obtained with a fine-tuned VGG19 combined with suitable image augmentation, which achieved a sensitivity of 93.05% and a precision of 94.46%. This improves the F-score of recent studies by 10.2%. We also achieved an accuracy of 86.97% with a pre-trained DenseNet model, which exceeds the 85.41% accuracy reported in recent research [30].
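
The pipeline the summary describes (a pre-trained CNN backbone fine-tuned for invasive ductal carcinoma classification, combined with image augmentation) can be illustrated with a short Keras sketch. This is not the authors' code: the directory name idc_patches/, the choice of which VGG19 layers to unfreeze, the dense-layer sizes, and the learning rate are assumptions made purely for illustration.

# Minimal sketch: fine-tuning a pre-trained VGG19 for binary IDC classification
# with image augmentation. Hyperparameters and paths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Image augmentation (rotations, shifts, flips) to enlarge the training set.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    validation_split=0.2,
)

# Hypothetical folder of histology patches, organised as one subfolder per class.
train_data = train_gen.flow_from_directory(
    "idc_patches/", target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="training",
)
val_data = train_gen.flow_from_directory(
    "idc_patches/", target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="validation",
)

# Pre-trained VGG19 backbone (ImageNet weights) without its classifier head.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = True
# Fine-tuning: freeze the early layers, train only the last few convolutional layers.
for layer in base.layers[:-4]:
    layer.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # IDC vs. non-IDC
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=[
        "accuracy",
        tf.keras.metrics.Recall(name="sensitivity"),
        tf.keras.metrics.Precision(name="precision"),
    ],
)

model.fit(train_data, validation_data=val_data, epochs=10)

In this sketch, sensitivity and precision correspond to Keras' Recall and Precision metrics; an F-score such as the one reported in the chapter would be computed from these on a held-out test set.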