Narrowing the distribution of ultrasound image quality using machine learning and deep learning

Bibliographic Details
Published in: The Journal of the Acoustical Society of America, Vol. 152, no. 4, p. A74
Main Authors: Khan, Christopher; Pan, Ying-Chun; Byram, Brett
Format: Journal Article
Language: English
Published: 01.10.2022
ISSN: 0001-4966, 1520-8524
DOI: 10.1121/10.0015593

More Information
Summary: Ultrasound image quality varies substantially across different subjects. In some cases, this means ultrasound images are non-diagnostic. Overcoming these non-diagnostic exams is a common goal for advanced ultrasound beamforming algorithms. Recently, new beamforming approaches using machine learning and deep learning have been proposed by a number of groups to overcome ultrasound’s image quality issues. Our group has proposed several methods relying on both machine learning and deep learning approaches. We will show how physics-based machine learning methods can lead directly to deep learning methods, and how the development and performance of these methods can generate insight into the underlying structure of ultrasound data. We will also show that, rather than producing artificial gains, deep learning methods can actually increase the available information in the form of improved dynamic range compared to delay-and-sum beamforming. The improvement is 15–20 dB, and it is achievable in both clean and highly cluttered data. Finally, we will show that ultrasound beamformers can be trained with unlabeled in vivo data in order to learn the underlying distribution of clutter in particular in vivo scenarios (e.g., echocardiography). This leads to improvements in imaging performance and can be used to generate insight into the interaction of different sources of image degradation in vivo.
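
The 15–20 dB dynamic range gain reported above is measured relative to conventional delay-and-sum beamforming, the standard baseline in which each channel's echoes are time-aligned to an image point and coherently summed. As context only, the sketch below is a minimal NumPy delay-and-sum beamformer for a single zero-degree plane-wave transmit; it is not the authors' machine learning method, and the function name, arguments, and geometry assumptions are illustrative.

import numpy as np

def delay_and_sum(rf, elem_x, fs, c, grid_x, grid_z):
    """Minimal delay-and-sum beamformer for a 0-degree plane-wave transmit.

    rf      : (n_elements, n_samples) received channel data
    elem_x  : (n_elements,) lateral element positions [m]
    fs      : sampling rate [Hz]
    c       : assumed speed of sound [m/s]
    grid_x, grid_z : 1-D lateral / axial pixel coordinates [m]
    Returns an (n_z, n_x) beamformed image before envelope detection.
    """
    n_elem, n_samp = rf.shape
    img = np.zeros((grid_z.size, grid_x.size))
    ch = np.arange(n_elem)
    for ix, x in enumerate(grid_x):
        for iz, z in enumerate(grid_z):
            # Transmit delay: the plane wave reaches depth z at time z / c.
            t_tx = z / c
            # Receive delay: travel time from the pixel back to each element.
            t_rx = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
            # Round-trip delays converted to fractional sample indices.
            idx = np.clip((t_tx + t_rx) * fs, 0, n_samp - 2)
            i0 = idx.astype(int)
            frac = idx - i0
            # Linear interpolation of each channel at its own delay.
            samples = (1 - frac) * rf[ch, i0] + frac * rf[ch, i0 + 1]
            # Coherent sum across the receive aperture.
            img[iz, ix] = samples.sum()
    return img

In practice the beamformed output would also be apodized, envelope-detected, and log-compressed before display; the learning-based methods summarized above operate on the same aligned channel data that this baseline simply sums.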