Convolutional Neural Network‐Based CT Image Segmentation of Kidney Tumours
| Published in | International Journal of Imaging Systems and Technology, Vol. 34, No. 4 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Hoboken, USA: John Wiley & Sons, Inc, 01.07.2024 (Wiley Subscription Services, Inc) |
| ISSN | 0899-9457; 1098-1098 |
| DOI | 10.1002/ima.23142 |
| Summary | ABSTRACT Kidney tumours are among the most common human tumours, and the main current treatment is surgical removal. CT images are usually segmented manually by a specialist for pre‐operative planning, but the result depends on the surgeon's experience and skill, and the process is time‐consuming. Because kidney tumours present complex lesions and varied morphologies that make segmentation difficult, this article proposes a convolutional neural network‐based automatic segmentation method for CT images of kidney tumours, addressing the two most common problems in tumour segmentation: boundary blurring and false positives. The method is accurate and reliable, and can assist doctors in surgical planning, diagnosis and treatment, relieving clinical workload to a certain extent. The proposed EfficientNetV2‐UNet segmentation model comprises three main parts: a feature extractor, a reconstruction network and a Bayesian decision algorithm. Firstly, to reduce tumour false positives, EfficientNetV2, which offers high training accuracy and efficiency, is selected as the backbone network; it extracts shallow features such as tumour location, morphology and texture from the CT image by downsampling. Secondly, a reconstruction network is designed on top of the backbone, consisting mainly of a conversion block, a deconvolution block, a convolution block and an output block; its up‐sampling architecture gradually recovers the spatial resolution of the feature maps, fully captures contextual information and completes the encoding–decoding structure. Multi‐scale feature fusion is achieved by concatenating the feature‐map channels at every level between the left and right sides of the network, preventing the loss of detail and enabling accurate tumour segmentation. Finally, to counter edge blurring in the segmented tumours, a Bayesian decision algorithm is cascaded onto the output of the reconstruction network; it combines the edge features of the original CT image and the segmented image for probability estimation, improving the accuracy of edge segmentation. Medical images in the special NII format were converted to NumPy matrix format using Python; more than 2000 CT images containing kidney tumours were then selected from the KiTS19 dataset as the model's dataset and standardised to 128 × 128. Experimental results show that the model outperforms many other advanced models, with good segmentation performance. (Illustrative code sketches of the main steps follow this record.) |
|---|---|
| Bibliography | This work is supported by the National Natural Science Foundation of China (61861012), the Guangxi Key Laboratory of Automatic Detecting Technology and Instruments (YQ23102), the Open Project Program of the Shanxi Key Laboratory of Advanced Semiconductor Optoelectronic Devices and Integrated Systems (Grant Numbers 2023SZKF10 and 2023SZKF04), the Science Foundation of Guilin University of Aerospace Technology (XJ20KT09) and the Research Basic Ability Improvement Project for Young and Middle‐aged Teachers of Guangxi Universities (2021KY0800). |
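The abstract describes an encoder–decoder in which the EfficientNetV2 backbone downsamples the CT slice and the reconstruction network upsamples it back, fusing feature-map channels from both sides of the network at every level. The paper's exact block definitions are not given in this record, so the following is a minimal sketch of one decoder stage under those assumptions; the class name `DecoderBlock`, the layer sizes and the BatchNorm/ReLU choices are illustrative, not taken from the paper.

```python
# Sketch of one decoder ("reconstruction network") stage: a deconvolution
# block that doubles spatial resolution, channel concatenation with the
# encoder skip feature (multi-scale fusion), then a convolution block.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                    # recover spatial resolution
        x = torch.cat([x, skip], dim=1)   # superimpose channels from the encoder side
        return self.conv(x)

# Example: fuse a deep 8x8 feature with a shallower 16x16 skip feature.
x = torch.randn(1, 256, 8, 8)
skip = torch.randn(1, 64, 16, 16)
print(DecoderBlock(256, 64, 128)(x, skip).shape)  # torch.Size([1, 128, 16, 16])
```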
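The Bayesian decision step is described only at a high level (probability estimation combining edge features of the original CT image and the segmented image), so the sketch below is one plausible reading rather than the paper's algorithm: the network's per-pixel tumour probability serves as the prior, edge strength from the CT slice supplies the class-conditional likelihood, and only an uncertain boundary band is re-decided. The function name `bayesian_edge_refine` and the `band`/`bins` parameters are assumptions.

```python
# Hypothetical Bayesian edge refinement: update boundary-band pixels with
# the posterior p(tumour | edge) computed from edge-strength histograms.
import numpy as np
from scipy import ndimage

def bayesian_edge_refine(prob, ct, band=2, bins=32):
    """prob: network output in [0, 1]; ct: original CT slice."""
    edge = ndimage.sobel(ct, axis=0) ** 2 + ndimage.sobel(ct, axis=1) ** 2
    edge = edge / (edge.max() + 1e-8)                # edge strength in [0, 1]
    hard = prob > 0.5
    border = ndimage.binary_dilation(hard, iterations=band) ^ \
             ndimage.binary_erosion(hard, iterations=band)   # blurred-edge band
    e = np.clip((edge * (bins - 1)).astype(int), 0, bins - 1)
    # Class-conditional edge likelihoods from the confident (non-border)
    # pixels, with add-one smoothing.
    lik_t = np.bincount(e[hard & ~border], minlength=bins) + 1.0
    lik_b = np.bincount(e[~hard & ~border], minlength=bins) + 1.0
    lik_t /= lik_t.sum()
    lik_b /= lik_b.sum()
    # Bayes' rule per pixel: p(t|e) = p(e|t)p(t) / (p(e|t)p(t) + p(e|b)p(b)).
    post = lik_t[e] * prob / (lik_t[e] * prob + lik_b[e] * (1 - prob) + 1e-8)
    out = prob.copy()
    out[border] = post[border]           # re-decide only the uncertain band
    return out > 0.5
```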
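For the data preparation (NII volumes converted to NumPy matrices with Python, tumour-bearing slices selected from KiTS19, and slices standardised to 128 × 128), a minimal sketch using nibabel and scikit-image follows. The per-case file names `imaging.nii.gz`/`segmentation.nii.gz` and the tumour label value 2 reflect the public KiTS19 layout and are assumptions not stated in this record.

```python
# Minimal KiTS19 preprocessing sketch: NII -> NumPy, keep tumour-bearing
# axial slices, resize to 128 x 128, normalise intensities to [0, 1].
import numpy as np
import nibabel as nib
from skimage.transform import resize

def tumour_slices(case_dir, size=(128, 128)):
    vol = nib.load(f"{case_dir}/imaging.nii.gz").get_fdata()
    seg = nib.load(f"{case_dir}/segmentation.nii.gz").get_fdata()
    for k in range(vol.shape[0]):                 # axial slices along axis 0
        mask = seg[k] == 2                        # tumour label (assumed 2)
        if not mask.any():
            continue                              # keep tumour-bearing slices only
        img = resize(vol[k], size, preserve_range=True)
        msk = resize(mask.astype(float), size, order=0,
                     preserve_range=True).astype(np.uint8)  # nearest-neighbour for labels
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        yield img.astype(np.float32), msk
```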