Weakly-supervised convolutional neural networks for multimodal image registration
Published in | Medical Image Analysis, Vol. 49, pp. 1–13
Format | Journal Article
Language | English
Published | Netherlands: Elsevier B.V., 01.10.2018
ISSN | 1361-8415
EISSN | 1361-8423
DOI | 10.1016/j.media.2018.07.002
Abstract | Highlights:
• A method to infer voxel-level correspondence from higher-level anatomical labels.
• Efficient and fully-automated registration for MR and ultrasound prostate images.
• Validation experiments with 108 pairs of labelled interventional patient images.
• Open-source implementation.
One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground-truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformation from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields that align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy for training, which utilises diverse types of anatomical labels that need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real time and is fully automated, without requiring any anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
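The core training signal described in the abstract — a predicted dense displacement field scored by the overlap (Dice) of warped anatomical labels, with no voxel-level ground truth — can be sketched outside any deep-learning framework. The following NumPy toy is an illustrative assumption, not the authors' released implementation; `warp_labels` and `dice` are hypothetical helper names, and nearest-neighbour resampling stands in for the differentiable resampler a real network would use.

```python
import numpy as np

def warp_labels(moving_label, ddf):
    """Warp a 3D binary label map with a dense displacement field (DDF).

    moving_label: (D, H, W) binary array.
    ddf: (3, D, H, W) voxel displacements; output voxel v samples
    moving_label at v + ddf[:, v] (backward warp, nearest neighbour).
    """
    grid = np.indices(moving_label.shape)             # (3, D, H, W) identity grid
    sample = np.rint(grid + ddf).astype(int)          # source coordinate per output voxel
    for axis, size in enumerate(moving_label.shape):  # clamp to the volume bounds
        sample[axis] = np.clip(sample[axis], 0, size - 1)
    return moving_label[sample[0], sample[1], sample[2]]

def dice(a, b, eps=1e-7):
    """Dice overlap between two binary label maps (the label-driven loss term)."""
    inter = np.sum(a * b)
    return (2.0 * inter + eps) / (np.sum(a) + np.sum(b) + eps)
```

During training, a network would predict `ddf` from the unlabelled image pair and be penalised by `1 - dice(warp_labels(moving_label, ddf), fixed_label)` summed over whatever labelled structures exist for that pair; at inference only the image pair is needed, matching the weak-supervision setup in the abstract.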
Author | Yipeng Hu, Marc Modat, Eli Gibson, Wenqi Li, Nooshin Ghavami, Ester Bonmati, Guotai Wang, Steven Bandula, Caroline M. Moore, Mark Emberton, Sébastien Ourselin, J. Alison Noble, Dean C. Barratt, Tom Vercauteren
AuthorAffiliation | a: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; b: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; c: Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; d: Centre for Medical Imaging, University College London, London, UK; e: Division of Surgery and Interventional Science, University College London, London, UK
Author details | 1. Yipeng Hu (ORCID 0000-0003-4902-0486; yipeng.hu@ucl.ac.uk), Centre for Medical Image Computing (CMIC), University College London (UCL); 2. Marc Modat (ORCID 0000-0002-5277-8530), CMIC, UCL; 3. Eli Gibson (ORCID 0000-0001-9207-7280), CMIC, UCL; 4. Wenqi Li (ORCID 0000-0002-7432-7386), CMIC, UCL; 5. Nooshin Ghavami, CMIC, UCL; 6. Ester Bonmati, CMIC, UCL; 7. Guotai Wang (ORCID 0000-0002-8632-158X), CMIC, UCL; 8. Steven Bandula, Centre for Medical Imaging, UCL; 9. Caroline M. Moore, Division of Surgery and Interventional Science, UCL; 10. Mark Emberton (ORCID 0000-0003-4230-0338), Division of Surgery and Interventional Science, UCL; 11. Sébastien Ourselin, CMIC, UCL; 12. J. Alison Noble (ORCID 0000-0002-3060-3772), Institute of Biomedical Engineering, University of Oxford; 13. Dean C. Barratt, CMIC, UCL; 14. Tom Vercauteren (ORCID 0000-0003-1794-0456), CMIC, UCL
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/30007253 (PubMed)
Copyright | © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Discipline | Medicine; Engineering
ExternalDocumentID | PMC6742510; PMID 30007253
Genre | Research Support, Non-U.S. Gov't; Journal Article
GrantInformation | Wellcome Trust (WT101957; 203145Z/16/Z); Cancer Research UK (C28070/A19985)
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Image-guided intervention; Weakly-supervised learning; Prostate cancer; Convolutional neural network; Medical image registration
License | This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Copyright © 2018 The Authors. Published by Elsevier B.V.
OpenAccessLink | https://www.sciencedirect.com/science/article/pii/S1361841518301051 |
PMID | 30007253 |
PublicationDate | 2018-10-01 |
PublicationPlace | Netherlands |
PublicationTitle | Medical image analysis |
PublicationTitleAlternate | Med Image Anal |
PublicationYear | 2018 |
Publisher | Elsevier B.V. |
Imaging doi: 10.1109/TMI.2015.2485299 – ident: 10.1016/j.media.2018.07.002_bib0003 doi: 10.1007/978-3-319-66182-7_35 – ident: 10.1016/j.media.2018.07.002_bib0014 doi: 10.1117/12.2293300 – year: 2015 ident: 10.1016/j.media.2018.07.002_bib0029 – volume: 197 start-page: e425 year: 2017 ident: 10.1016/j.media.2018.07.002_bib0007 article-title: MP33-20 the smarttarget biopsy trial: a prospective paired blinded trial with randomisation to compare visual-estimation and image-fusion targeted prostate biopsies publication-title: J. Urol doi: 10.1016/j.juro.2017.02.1016 – year: 2014 ident: 10.1016/j.media.2018.07.002_bib0054 – volume: 59 start-page: 477 year: 2011 ident: 10.1016/j.media.2018.07.002_bib0006 article-title: Magnetic resonance imaging for the detection, localisation, and characterisation of prostate cancer: recommendations from a European consensus meeting publication-title: Eur. Urol. doi: 10.1016/j.eururo.2010.12.009 – volume: 33 start-page: 33 year: 2016 ident: 10.1016/j.media.2018.07.002_bib0041 article-title: Reflections on ultrasound image analysis publication-title: Med. Image Anal. doi: 10.1016/j.media.2016.06.015 – ident: 10.1016/j.media.2018.07.002_bib0066 doi: 10.1109/TMI.2018.2791721 – ident: 10.1016/j.media.2018.07.002_bib0036 – volume: 2015 start-page: 234 year: 2015 ident: 10.1016/j.media.2018.07.002_bib0047 article-title: U-Net: convolutional networks for biomedical image segmentation publication-title: Med. Image Comput. Comput. Interv. MICCAI – ident: 10.1016/j.media.2018.07.002_bib0028 doi: 10.1109/CVPR.2017.179 – volume: 36 start-page: 2010 year: 2017 ident: 10.1016/j.media.2018.07.002_bib0004 article-title: Robust 2-D-3-D registration optimization for motion compensation during 3-D TRUS-guided biopsy using learned prostate motion data publication-title: IEEE Trans. Med. 
Imaging doi: 10.1109/TMI.2017.2703150 – volume: 6 year: 2016 ident: 10.1016/j.media.2018.07.002_bib0068 article-title: Patient-specific deformation modelling via elastography: application to image-guided prostate interventions publication-title: Sci. Rep – volume: 158 start-page: 378 year: 2017 ident: 10.1016/j.media.2018.07.002_bib0072 article-title: Quicksilver: fast predictive image registration – a deep learning approach publication-title: Neuroimage doi: 10.1016/j.neuroimage.2017.07.008 – volume: 50 start-page: 218 year: 2015 ident: 10.1016/j.media.2018.07.002_bib0042 article-title: Microstructural characterization of normal and malignant human prostate tissue with vascular, extracellular, and restricted diffusion for cytometry in tumours magnetic resonance imaging publication-title: Invest. Radiol. doi: 10.1097/RLI.0000000000000115 – volume: 64 year: 2017 ident: 10.1016/j.media.2018.07.002_bib0025 article-title: Development and phantom validation of a 3-D-ultrasound-guided system for targeting MRI-visible lesions during transrectal prostate biopsy publication-title: IEEE Trans. Biomed. Eng doi: 10.1109/TBME.2016.2582734 – ident: 10.1016/j.media.2018.07.002_bib0043 – ident: 10.1016/j.media.2018.07.002_bib0027 doi: 10.1109/CVPR.2017.243 – volume: 55 start-page: 1567 year: 2014 ident: 10.1016/j.media.2018.07.002_bib0069 article-title: Hyperpolarized 13C MR for molecular imaging of prostate cancer publication-title: J. Nucl. Med. doi: 10.2967/jnumed.114.141705 – volume: 16 start-page: 687 year: 2012 ident: 10.1016/j.media.2018.07.002_bib0022 article-title: MR to ultrasound registration for image-guided prostate interventions publication-title: Med. Image Anal. doi: 10.1016/j.media.2010.11.003 – year: 2015 ident: 10.1016/j.media.2018.07.002_bib0057 article-title: Three-dimensional nonrigid MR-TRUS registration using dual optimization publication-title: IEEE Trans. Med. Imaging. 
doi: 10.1109/TMI.2014.2375207 – volume: 9 start-page: 249 year: 2010 ident: 10.1016/j.media.2018.07.002_bib0017 article-title: Understanding the difficulty of training deep feedforward neural networks publication-title: PMLR – volume: 34 start-page: 2535 year: 2015 ident: 10.1016/j.media.2018.07.002_bib0030 article-title: Statistical biomechanical surface registration: application to MR-TRUS fusion for prostate interventions publication-title: IEEE Trans. Med. Imaging doi: 10.1109/TMI.2015.2443978 – volume: 41 start-page: 40 year: 2017 ident: 10.1016/j.media.2018.07.002_bib0009 article-title: 3D deeply supervised network for automated segmentation of volumetric medical images publication-title: Med. Image Anal. doi: 10.1016/j.media.2017.05.001 – volume: 42 start-page: 2470 year: 2015 ident: 10.1016/j.media.2018.07.002_bib0061 article-title: Biomechanical modeling constrained surface-based image registration for prostate MR guided TRUS biopsy publication-title: Med. Phys. doi: 10.1118/1.4917481 |
StartPage | 1 |
SubjectTerms | 3-D printers; Artificial neural networks; Centroids; Convolutional neural network; Deformation; Formability; Genetic transformation; Glands; Image registration; Image-guided intervention; Inference; Labels; Magnetic resonance imaging; Medical image registration; Medical imaging; Neural networks; NMR; Nuclear magnetic resonance; Organs; Patients; Prostate cancer; Training; Ultrasound; Weakly-supervised learning |
Title | Weakly-supervised convolutional neural networks for multimodal image registration |
URI | https://dx.doi.org/10.1016/j.media.2018.07.002 https://www.ncbi.nlm.nih.gov/pubmed/30007253 https://www.proquest.com/docview/2126555326 https://www.proquest.com/docview/2070250547 https://pubmed.ncbi.nlm.nih.gov/PMC6742510 |
Volume | 49 |
Main Authors | Hu, Yipeng; Modat, Marc; Gibson, Eli; Li, Wenqi; et al. |