mm3DSNet: multi-scale and multi-feedforward self-attention 3D segmentation network for CT scans of hepatobiliary ducts
| Published in | Medical & biological engineering & computing Vol. 63; no. 1; pp. 127 - 138 |
|---|---|
| Main Authors | , , , , , |
| Format | Journal Article |
| Language | English |
| Published | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.01.2025; Springer Nature B.V |
| ISSN | 0140-0118, 1741-0444 |
| DOI | 10.1007/s11517-024-03183-z |
| Summary: | Image segmentation is a key step in the 3D reconstruction of the hepatobiliary duct tree, which is significant for preoperative planning. In this paper, a novel 3D U-Net variant is designed for segmentation of hepatobiliary ducts from abdominal CT scans, composed of a 3D encoder-decoder and a 3D multi-feedforward self-attention module (MFSAM). To extract sufficient semantic and spatial features with high inference speed, a 3D ConvNeXt block is designed as the 3D extension of the 2D ConvNeXt. To improve semantic feature extraction, the MFSAM is designed to transfer the semantic and spatial features at different scales from the encoder to the decoder. Also, to balance the losses for the voxels and the edges of the hepatobiliary ducts, a boundary-aware overlap cross-entropy loss is proposed by combining the cross-entropy loss, the Dice loss, and the boundary loss. Experimental results indicate that the proposed method is superior to several existing deep networks, as well as to a radiologist without extensive experience, in CT segmentation of hepatobiliary ducts, achieving a Dice score of 76.54% and a Hausdorff distance (HD) of 6.56. |
|---|---|
| Graphical Abstract | |
| Bibliography: | ObjectType-Article-1; SourceType-Scholarly Journals-1; ObjectType-Feature-2 |
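The summary reports segmentation quality as a Dice score and describes a boundary-aware loss built from cross-entropy, Dice, and boundary terms. As a minimal sketch of these ideas (not the paper's implementation; the weighting of the three terms and the function names here are assumptions, since the record gives no details):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Dice overlap between two binary masks, e.g. predicted vs.
    ground-truth hepatobiliary-duct voxels. Returns a value in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def boundary_aware_loss(ce_loss, dice, boundary_loss,
                        w_ce=1.0, w_dice=1.0, w_bd=1.0):
    """Illustrative combination of the three terms named in the abstract.
    The weights w_ce, w_dice, w_bd are assumptions, not values from the paper."""
    return w_ce * ce_loss + w_dice * (1.0 - dice) + w_bd * boundary_loss
```

For example, two masks sharing one of three foreground voxels give a Dice of 2·1/(2+1) ≈ 0.667; a perfect prediction gives 1.0, driving the Dice term of the combined loss to zero.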