Deep learning-based liver segmentation for fusion-guided intervention
| Published in | International journal for computer assisted radiology and surgery Vol. 15; no. 6; pp. 963 - 972 |
|---|---|
| Main Authors | Fang, Xi; Xu, Sheng; Wood, Bradford J.; Yan, Pingkun |
| Format | Journal Article |
| Language | English |
| Published | Cham: Springer International Publishing (Springer Nature B.V.), 01.06.2020 |
| Subjects | |
| ISSN | 1861-6410 (print); 1861-6429 (electronic) |
| DOI | 10.1007/s11548-020-02147-6 |
| Abstract | Purpose
Tumors often have different imaging properties, and no single imaging modality can visualize all tumors. In CT-guided needle placement procedures, image fusion (e.g., with MRI, PET, or contrast CT) is often used for guidance when the tumor is not directly visible in CT. To achieve image fusion, the interventional CT image needs to be registered to an imaging modality in which the tumor is visible. However, multi-modality image registration is a very challenging problem. In this work, we develop a deep learning-based liver segmentation algorithm and use the segmented surfaces to assist image fusion, with applications in guided needle placement procedures for diagnosing and treating liver tumors.
Methods
The developed segmentation method integrates multi-scale input and multi-scale output features in a single network to abstract contextual information. The automatic segmentation results are used to register an interventional CT with a diagnostic image. The registration helps visualize the target and guide the interventional operation.
Results
The developed segmentation method is highly accurate, achieving a Dice score of 96.1% on 70 CT scans provided by the LiTS challenge. The segmentation algorithm was then applied to a set of images acquired for liver tumor intervention for surface-based image fusion. The effectiveness of the proposed methods is demonstrated through a number of clinical cases.
Conclusion
Our study shows that deep learning-based image segmentation can produce results useful for image fusion in interventional guidance. Such a technique may lead to a number of other potential applications. |
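The Methods above describe a segmentation network that integrates multi-scale input and multi-scale output features in a single model. The record does not include the authors' architecture, so the following PyTorch sketch only illustrates that general pattern under assumed details: downsampled copies of the input CT slice are injected at deeper encoder levels, and an auxiliary prediction head is attached at each decoder scale (deep supervision). The module names, channel widths, and 2D U-Net-style layout are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' network) of a segmentation model with
# multi-scale inputs to the encoder and multi-scale (deeply supervised) outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MultiScaleSegNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        # Encoder levels 2 and 3 also receive a downsampled copy of the input image.
        self.enc2 = conv_block(base + in_ch, base * 2)
        self.enc3 = conv_block(base * 2 + in_ch, base * 4)
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        # Multi-scale outputs: one prediction head per decoder resolution.
        self.head1 = nn.Conv2d(base, num_classes, 1)
        self.head2 = nn.Conv2d(base * 2, num_classes, 1)
        self.head3 = nn.Conv2d(base * 4, num_classes, 1)

    def forward(self, x):
        x2 = F.avg_pool2d(x, 2)   # half-resolution copy of the input
        x4 = F.avg_pool2d(x, 4)   # quarter-resolution copy of the input
        e1 = self.enc1(x)
        e2 = self.enc2(torch.cat([F.max_pool2d(e1, 2), x2], dim=1))
        e3 = self.enc3(torch.cat([F.max_pool2d(e2, 2), x4], dim=1))
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        # Return predictions at every scale; the finest head is used at inference.
        return self.head1(d1), self.head2(d2), self.head3(e3)


if __name__ == "__main__":
    full, half, quarter = MultiScaleSegNet()(torch.randn(1, 1, 256, 256))
    print(full.shape, half.shape, quarter.shape)  # (1,2,256,256) (1,2,128,128) (1,2,64,64)
```

During training, the coarser outputs would typically be compared against downsampled labels and added to the loss with small weights; at inference only the full-resolution prediction is kept.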
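According to the Methods, the automatic segmentation results are used to register the interventional CT with the diagnostic image through surface-based fusion. One classic way to rigidly align two segmented surfaces is point-to-point iterative closest point (ICP); the sketch below applies that generic approach to liver surface points extracted from the two masks (for example with marching cubes). It is an illustration under those assumptions, not the authors' registration pipeline, which may use a different or more robust variant.

```python
# Generic surface-based rigid registration via point-to-point ICP (illustrative only).
# `moving` would hold surface points from the interventional-CT liver mask and `fixed`
# the points from the diagnostic-image mask, e.g. from skimage.measure.marching_cubes.
import numpy as np
from scipy.spatial import cKDTree


def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src


def icp(moving, fixed, iters=50, tol=1e-6):
    """Return a 4x4 rigid transform aligning `moving` (Nx3) points to `fixed` (Mx3)."""
    tree = cKDTree(fixed)
    current = moving.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(current)                 # closest-point correspondences
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t                     # apply the incremental update
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        if abs(prev_err - dist.mean()) < tol:
            break
        prev_err = dist.mean()
    return T
```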
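Once a rigid transform between the two coordinate frames is available, the diagnostic image can be resampled onto the interventional CT grid and overlaid, so that a tumor visible only in the diagnostic modality appears in the interventional view. The sketch below assumes the 4x4 transform from the ICP sketch above, voxel coordinates, and identical voxel spacing in both volumes; it is a simplified illustration, not the guidance software described in the paper.

```python
# Illustrative fusion step: resample the diagnostic volume onto the interventional grid
# using a 4x4 rigid transform T that maps interventional coordinates to diagnostic ones
# (as returned by icp(interventional_surface, diagnostic_surface) in the sketch above).
import numpy as np
from scipy.ndimage import affine_transform


def resample_to_interventional(diagnostic, T, output_shape):
    """Pull diagnostic-image intensities onto the interventional-CT voxel grid."""
    R, t = T[:3, :3], T[:3, 3]
    # affine_transform maps each output voxel o to the input location R @ o + t,
    # which is exactly the interventional -> diagnostic mapping estimated above.
    return affine_transform(diagnostic, R, offset=t, output_shape=output_shape, order=1)


def overlay(interventional_ct, resampled, alpha=0.5):
    """Simple intensity blend for display purposes."""
    def norm(v):
        return (v - v.min()) / (v.max() - v.min() + 1e-8)
    return (1 - alpha) * norm(interventional_ct) + alpha * norm(resampled)
```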
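Finally, the Results report segmentation accuracy as a Dice score (96.1% on 70 LiTS CT scans). For reference, the Dice similarity coefficient between a predicted mask A and a reference mask B is 2|A ∩ B| / (|A| + |B|); the helper below implements this standard definition and is not code from the paper.

```python
# Standard Dice similarity coefficient between two binary masks (illustrative helper).
import numpy as np


def dice_score(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))


if __name__ == "__main__":
    a, b = np.zeros((4, 4), dtype=int), np.zeros((4, 4), dtype=int)
    a[:2, :] = 1   # 8 predicted voxels
    b[1:3, :] = 1  # 8 reference voxels, 4 of them overlapping
    print(dice_score(a, b))  # 2 * 4 / (8 + 8) = 0.5
```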
| Author | Fang, Xi; Xu, Sheng; Wood, Bradford J.; Yan, Pingkun |
| Author_xml | – Fang, Xi (Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute)
– Xu, Sheng (Center for Interventional Oncology, Radiology and Imaging Sciences, National Institutes of Health)
– Wood, Bradford J. (Center for Interventional Oncology, Radiology and Imaging Sciences, National Institutes of Health)
– Yan, Pingkun, yanp2@rpi.edu (Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute) |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/32314228 (view this record in MEDLINE/PubMed) |
| ContentType | Journal Article |
| Copyright | CARS 2020 |
| DOI | 10.1007/s11548-020-02147-6 |
| DatabaseName | CrossRef PubMed ProQuest Health & Medical Complete (Alumni) MEDLINE - Academic PubMed Central (Full Participant titles) Unpaywall for CDI: Periodical Content Unpaywall |
| DatabaseTitle | CrossRef PubMed ProQuest Health & Medical Complete (Alumni) MEDLINE - Academic |
| DatabaseTitleList | ProQuest Health & Medical Complete (Alumni) MEDLINE - Academic PubMed |
| Discipline | Medicine; Computer Science |
| EISSN | 1861-6429 |
| EndPage | 972 |
| ExternalDocumentID | oai:pubmedcentral.nih.gov:7305971 PMC7305971 32314228 10_1007_s11548_020_02147_6 |
| Genre | Journal Article |
| GrantInformation_xml | National Institute of Biomedical Imaging and Bioengineering (NIBIB, NIH HHS; funder ID http://dx.doi.org/10.13039/100000070), grants R21EB028001 and R01EB027898 |
| ISSN | 1861-6410 1861-6429 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 6 |
| Keywords | Deep learning; Image segmentation; Image-guided interventions; Image fusion |
| Language | English |
| License | Terms of use and reuse: academic research for non-commercial purposes; see https://www.springer.com/aam-terms-v1 for full terms. |
| OpenAccessLink | https://www.ncbi.nlm.nih.gov/pmc/articles/7305971 |
| PMID | 32314228 |
| PQID | 2414770301 |
| PQPubID | 2043910 |
| PageCount | 10 |
| PublicationCentury | 2000 |
| PublicationDate | 2020-06-01 |
| PublicationDecade | 2020 |
| PublicationPlace | Cham |
| PublicationPlace_xml | – name: Cham – name: Germany – name: Heidelberg |
| PublicationSubtitle | A journal for interdisciplinary research, development and applications of image guided diagnosis and therapy |
| PublicationTitle | International journal for computer assisted radiology and surgery |
| PublicationTitleAbbrev | Int J CARS |
| PublicationTitleAlternate | Int J Comput Assist Radiol Surg |
| PublicationYear | 2020 |
| Publisher | Springer International Publishing Springer Nature B.V |
| StartPage | 963 |
| SubjectTerms | Algorithms; Computed tomography; Computer Imaging; Computer Science; Computer vision; Deep learning; Diagnostic systems; Health Informatics; Image acquisition; Image contrast; Image processing; Image registration; Image segmentation; Imaging; Liver; Machine learning; Magnetic resonance imaging; Medical imaging; Medicine; Medicine & Public Health; Multiscale analysis; Original Article; Pattern Recognition and Graphics; Placement; Positron emission; Radiology; Surgery; Tomography; Tumors; Vision |
| Title | Deep learning-based liver segmentation for fusion-guided intervention |
| URI | https://link.springer.com/article/10.1007/s11548-020-02147-6 https://www.ncbi.nlm.nih.gov/pubmed/32314228 https://www.proquest.com/docview/2414770301 https://www.proquest.com/docview/2393041074 https://pubmed.ncbi.nlm.nih.gov/PMC7305971 https://www.ncbi.nlm.nih.gov/pmc/articles/7305971 |
| UnpaywallVersion | submittedVersion |
| Volume | 15 |