A Bi-Directionally Fused Boundary Aware Network for Skin Lesion Segmentation
| Published in | IEEE Transactions on Image Processing, Vol. 33, pp. 6340-6353 |
|---|---|
| Main Authors | Yuan, Feiniu; Peng, Yuhuan; Huang, Qinghua; Li, Xuelong |
| Format | Journal Article |
| Language | English |
| Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2024 |
| Subjects | Accuracy; Artificial neural networks; Attention; Bidirectional control; Biomedical imaging; Boundaries; CNN; Coders; Convolutional neural networks; Decoders; Decoding; Deep learning; Design; Feature extraction; Feature maps; Image segmentation; Lesions; Segmentation; Semantics; Skin; Skin lesion segmentation; Spatial data; Transformer; Transformers |
| Online Access | https://ieeexplore.ieee.org/document/10733833 |
| ISSN | 1057-7149 (print); 1941-0042 (electronic) |
| DOI | 10.1109/TIP.2024.3482864 |
| Abstract | It is challenging to visually identify skin lesions with irregular shapes, blurred boundaries, and large variations in scale. A Convolutional Neural Network (CNN) extracts local features rich in spatial information, while a Transformer captures global information but lacks spatial detail. To overcome the difficulty of discriminating small or blurred skin lesions, we propose a Bi-directionally Fused Boundary Aware Network (BiFBA-Net). To exploit the complementary features produced by CNNs and Transformers, we design a dual-encoding structure. Unlike existing dual encoders, our method introduces a Bi-directional Attention Gate (Bi-AG) with two inputs and two outputs for crosswise feature fusion. The Bi-AG accepts features from both the CNN and Transformer encoders, and two attention gates generate two attention outputs that are sent back to the respective encoders. We thus achieve an adequate, bi-directional, attention-driven exchange of multi-scale information between the CNN and Transformer encoders. To restore feature maps with high fidelity, we propose a progressive, boundary-aware decoding structure containing three decoders supervised by six losses. The first decoder is a CNN that recovers spatial details. The second is a Partial Decoder (PD) that aggregates high-level features with richer semantics. The third is a Boundary Aware Decoder (BAD) that progressively improves boundary accuracy. The BAD uses residual structures and Reverse Attention (RA) at different scales to deeply mine structural and spatial details for refining lesion boundaries. Extensive experiments on public datasets show that BiFBA-Net achieves higher segmentation accuracy and much better boundary perception than the compared methods. It also alleviates both over-segmentation of small lesions and under-segmentation of large ones. |
|---|---|
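The dual-encoder fusion described in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration of a bi-directional attention gate in the spirit of the Bi-AG: two same-scale inputs, two attention maps, and two gated outputs returned to the CNN and Transformer branches. The additive-gate internals are an assumption (modeled on Attention U-Net-style gates), and all names (`BiAG`, `f_cnn`, `f_trans`) are ours, not the paper's.

```python
# A minimal sketch of a bi-directional attention gate (Bi-AG): two inputs,
# two attention maps, two gated outputs sent back to the two encoders.
# The additive-gate design is an assumption; the paper's internals may differ.
import torch
import torch.nn as nn

class BiAG(nn.Module):
    def __init__(self, c_cnn: int, c_trans: int, c_mid: int):
        super().__init__()
        # Project both feature maps into a shared intermediate space.
        self.proj_cnn = nn.Conv2d(c_cnn, c_mid, kernel_size=1)
        self.proj_trans = nn.Conv2d(c_trans, c_mid, kernel_size=1)
        # One gate per direction: each yields a single-channel attention map.
        self.gate_to_cnn = nn.Sequential(
            nn.ReLU(inplace=True), nn.Conv2d(c_mid, 1, 1), nn.Sigmoid())
        self.gate_to_trans = nn.Sequential(
            nn.ReLU(inplace=True), nn.Conv2d(c_mid, 1, 1), nn.Sigmoid())

    def forward(self, f_cnn, f_trans):
        # f_cnn, f_trans: (B, C, H, W) features at the same spatial scale.
        shared = self.proj_cnn(f_cnn) + self.proj_trans(f_trans)
        a_cnn = self.gate_to_cnn(shared)      # attention sent back to the CNN
        a_trans = self.gate_to_trans(shared)  # attention sent back to the Transformer
        return f_cnn * a_cnn, f_trans * a_trans

# Usage: out_c, out_t = BiAG(64, 96, 32)(torch.randn(1, 64, 56, 56),
#                                        torch.randn(1, 96, 56, 56))
```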
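The Reverse Attention step credited to the Boundary Aware Decoder can be sketched similarly. Following the common RA formulation (as in PraNet and related work), the complement of the sigmoid of a coarse prediction emphasizes the not-yet-segmented region; masking an encoder feature with it and adding the refinement back onto the coarse logits gives a residual boundary refinement. This is an illustrative approximation, not the paper's exact BAD wiring.

```python
# One reverse-attention (RA) refinement step, sketched under the usual
# RA formulation: erase the confidently segmented region, mine details
# from what remains, and refine the coarse logits residually.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RAStep(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, feat, coarse_logits):
        # feat: (B, C, H, W) encoder feature; coarse_logits: (B, 1, h, w).
        coarse = F.interpolate(coarse_logits, size=feat.shape[2:],
                               mode='bilinear', align_corners=False)
        ra = 1.0 - torch.sigmoid(coarse)   # highlight uncertain/boundary region
        residual = self.refine(feat * ra)  # mine details from the masked feature
        return coarse + residual           # residual refinement of the logits
```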
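Finally, "three decoders with six supervised losses" suggests deep supervision over multiple side outputs. A generic version upsamples each side output to ground-truth resolution and sums per-output losses; the BCE term below is a placeholder, since the record does not specify the paper's actual loss functions.

```python
# A generic deep-supervision loss over several side outputs. The choice of
# BCE and the weighting scheme are placeholders, not the paper's losses.
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_logits, gt, weights=None):
    """side_logits: list of (B, 1, h_i, w_i) predictions; gt: (B, 1, H, W) float mask."""
    weights = weights or [1.0] * len(side_logits)
    total = 0.0
    for w, logits in zip(weights, side_logits):
        # Bring every side output to the ground-truth resolution.
        up = F.interpolate(logits, size=gt.shape[2:], mode='bilinear',
                           align_corners=False)
        total = total + w * F.binary_cross_entropy_with_logits(up, gt)
    return total
```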
| Author | Yuan, Feiniu; Peng, Yuhuan; Huang, Qinghua; Li, Xuelong |
| Author Details | Feiniu Yuan (ORCID 0000-0003-3286-1481), College of Information, Mechanical and Electrical Engineering, Shanghai Normal University (SHNU), Shanghai, China, yfn@ustc.edu; Yuhuan Peng (ORCID 0009-0001-0299-4630), College of Humanities, SHNU, Shanghai, China, 1402196323@qq.com; Qinghua Huang (ORCID 0000-0003-1080-6940), School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China, qhhuang@nwpu.edu.cn; Xuelong Li (ORCID 0000-0003-2924-946X), School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China, xuelong_li@ieee.org |
| PubMed Link | https://www.ncbi.nlm.nih.gov/pubmed/39441680 |
| CODEN | IIPRE4 |
| Cited By | DOI 10.1002/mp.17727 |
| Copyright | © The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| Discipline | Applied Sciences; Engineering |
| EISSN | 1941-0042 |
| EndPage | 6353 |
| Genre | Original Research; Journal Article |
| Grant Information | National Natural Science Foundation of China (Grant 62272308); National Key Research and Development Program of China (Grant 2023YFC3305802); Capacity Construction Project of Shanghai Local Colleges (Grant 23010504100) |
| IsPeerReviewed | true |
| IsScholarly | true |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037 |
| PMID | 39441680 |
| PageCount | 14 |
| PublicationTitle | IEEE Transactions on Image Processing |
| PublicationTitleAbbrev | TIP |
| PublicationTitleAlternate | IEEE Trans Image Process |
| PublicationYear | 2024 |
| StartPage | 6340 |
| URI | https://ieeexplore.ieee.org/document/10733833; https://www.ncbi.nlm.nih.gov/pubmed/39441680; https://www.proquest.com/docview/3126089591; https://www.proquest.com/docview/3120057714 |
| Volume | 33 |