Classification of White Blood Cells with PatternNet-fused Ensemble of Convolutional Neural Networks (PECNN)

Bibliographic Details
Published in: 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 325-330
Main Authors: Wang, Justin L.; Li, Anthony Y.; Huang, Michelle; Ibrahim, Ali K.; Zhuang, Hanqi; Ali, Ali Muhamed
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2018
DOI: 10.1109/ISSPIT.2018.8642630

More Information
Summary: The traditional process of the complete blood count test has long provided hematologists with information to diagnose blood-based disorders. However, this tedious and manual process can be subject to bias and inaccurate classifications. As a result, automated methods of white blood cell detection and counting have been sought as a means to facilitate the process and improve classification accuracy. In this paper, we present a new method, the PatternNet-fused Ensemble of Convolutional Neural Networks (PECNN), for classifying white blood cells. The proposed architecture relies on an ensemble method, PatternNet, to fuse the outputs of n randomly generated Convolutional Neural Networks. The reliance on randomly generated structures allows the proposed algorithm to adapt to the data, generalizing its applications. PatternNet captures the strengths of each participating model while being insensitive to outliers. Through our experimental procedure, we show that our ensemble model outperforms other ensemble models even in the presence of noisy data. We also show that the proposed architecture performs as well as a much more sophisticated deep network at a much lower computational cost.
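
To make the described architecture concrete, the sketch below builds an ensemble of n small, randomly generated CNNs and fuses their class scores with a small fully connected head. This is only a minimal sketch under assumptions: PyTorch is used, the five output classes (the common white blood cell types), the layer widths and depths, the random-structure generator, and the fully connected fusion head are illustrative stand-ins chosen here, not the authors' PatternNet implementation.

```python
# Illustrative sketch of a PatternNet-style fusion of randomly generated CNNs.
# All layer sizes, the random-structure generator, and the fusion head are
# assumptions for demonstration; they are not the paper's implementation.
import random
import torch
import torch.nn as nn

def random_cnn(num_classes: int = 5, in_channels: int = 3) -> nn.Module:
    """Build a small CNN with randomly chosen depth and channel widths."""
    layers, channels = [], in_channels
    for _ in range(random.randint(2, 4)):        # random depth
        width = random.choice([16, 32, 64])      # random width
        layers += [nn.Conv2d(channels, width, 3, padding=1),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]
        channels = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(channels, num_classes)]
    return nn.Sequential(*layers)

class FusedEnsemble(nn.Module):
    """Concatenate the class scores of n base CNNs and fuse them with a
    small fully connected network (standing in for PatternNet)."""
    def __init__(self, n: int = 5, num_classes: int = 5):
        super().__init__()
        self.members = nn.ModuleList([random_cnn(num_classes) for _ in range(n)])
        self.fusion = nn.Sequential(
            nn.Linear(n * num_classes, 32), nn.ReLU(),
            nn.Linear(32, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = torch.cat([m(x) for m in self.members], dim=1)
        return self.fusion(scores)

# Example: classify a batch of 128x128 blood-smear crops into 5 WBC types.
model = FusedEnsemble(n=5, num_classes=5)
logits = model(torch.randn(8, 3, 128, 128))
print(logits.shape)  # torch.Size([8, 5])
```

In this reading of the abstract, the fusion head learns how much to trust each randomly generated member, which is what lets the ensemble exploit each member's strengths while damping outlier predictions.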