Transthoracic echocardiography is the most widely used imaging technique in cardiology. The examinations typically follow a protocol in which an ultrasound probe is placed on the subject's chest to acquire a set of standard views of the heart.

[Video: standard echocardiographic views (Østvik)]

These images are used as the reference when assessing cardiac function, so it is essential that the morphophysiological representations are correct. Image quality varies substantially between patients and is also operator dependent, which increases interobserver variability and lowers the feasibility of detailed quantitative measurements in the clinic. Further, with the introduction of hand-held devices, a new group of users is adopting echocardiography, making ultrasound available in the field and in outpatient clinics. Increased availability is great news, but it also triggers a range of new challenges: training opportunities are limited, and expert supervision is often constrained by workload. How can these increasingly relevant problems be solved?

At the Centre for Innovative Ultrasound Solutions (CIUS), we believe modern machine learning methods, such as deep learning, can provide the necessary solutions. In recent years, deep learning has received tremendous attention by delivering impressive results on image, speech and text recognition tasks. As opposed to traditional machine learning approaches built on hand-crafted features, these methods learn both the feature extraction and the classification directly from the training data. This is particularly interesting for ultrasound, where hand-crafting generic features can be extremely difficult. For details about the method, its applications and challenges, read Postdoc Erik Smistad's interesting blog post on the topic.
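To make that contrast concrete, here is a minimal sketch of such a network, written in PyTorch. The layer sizes and the number of view classes are illustrative assumptions, not taken from our models; the point is simply that no hand-crafted features appear anywhere between the raw pixels and the predicted class.

```python
import torch
import torch.nn as nn

# A minimal convolutional classifier: the convolutional layers learn the
# feature extraction and the final linear layer learns the classification,
# directly from the training data.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=8):  # class count is an illustrative assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)                  # learned feature extraction
        return self.classifier(x.flatten(1))  # learned classification

# One grayscale "ultrasound frame" as a dummy input, batch size of one
frame = torch.randn(1, 1, 256, 256)
logits = SmallCNN()(frame)
print(logits.shape)  # torch.Size([1, 8]): one score per candidate view
```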

Work package 4, led by Professor Lasse Løvstakken, covers new image processing and analysis methods for extracting relevant data from ultrasound images, as well as enhanced data visualization tools for displaying clinical information in an intuitive way. Our machine learning team is growing rapidly, and we are currently using deep learning to build intelligent systems that can standardize and quality assure the acquisition process, and ultimately help the operator achieve the best possible images for a given patient. Such systems could confirm that the images are good enough for quantitative measurements, and then perform those measurements quickly and automatically on-site by extracting the relevant information directly from the images.

Two of the most essential building blocks of our systems are the view classification and segmentation algorithms. For both tasks, we employ convolutional neural networks trained on a vast amount of annotated ultrasound data. The annotations are made by experts, whose knowledge is thereby incorporated into the models. The view classification tells us what we are looking at, for example whether it is one of the standard views shown in the video above. This enables perception logic: if an image is considered suitable for segmentation, the segmentation is performed (see the video below). The segmentation partitions the image into relevant objects by assigning each pixel to a known class, for instance the endocardial border of the left ventricle, as seen in the video. Together, view classification and segmentation constitute a natural pipeline that can facilitate automatic measurements and acquisition guidance; a sketch of this logic follows the video. This can be helpful for less experienced users and in point-of-care situations, and it can also reduce the workload in the outpatient clinic.

[Video: automatic segmentation of the left ventricle (Østvik)]
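To illustrate how the two building blocks chain together, here is a minimal sketch of the classify-then-segment logic. The function names, view labels, confidence threshold and dummy outputs are all hypothetical stand-ins, not our actual implementation; in a real system the two stand-in functions would each run a trained network.

```python
import numpy as np

# Hypothetical perception logic: classify the view first, then segment only
# frames that show a view the segmentation model was trained for.
SEGMENTABLE_VIEWS = {"A4C", "A2C", "ALAX"}  # apical 4-chamber, 2-chamber, long-axis

def classify_view(frame: np.ndarray) -> tuple[str, float]:
    """Stand-in for the view-classification CNN: returns (label, confidence)."""
    return "A4C", 0.97  # a real system would run a trained network here

def segment_left_ventricle(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the segmentation CNN: returns per-pixel class labels."""
    mask = np.zeros(frame.shape, dtype=np.uint8)
    mask[100:180, 80:160] = 1  # pretend label 1 marks the LV endocardium
    return mask

def process_frame(frame: np.ndarray, min_confidence: float = 0.9):
    view, conf = classify_view(frame)
    # Skip segmentation if the view is unsupported or the classifier
    # is not confident enough that the frame is usable.
    if view not in SEGMENTABLE_VIEWS or conf < min_confidence:
        return view, None
    return view, segment_left_ventricle(frame)

frame = np.random.rand(256, 256).astype(np.float32)  # dummy ultrasound frame
view, mask = process_frame(frame)
print(view, None if mask is None else int(mask.sum()))  # e.g. "A4C 6400"
```

Gating the segmentation on the classifier's output keeps the more expensive model away from frames it was never trained for, which reflects the kind of quality assurance described above.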

Undoubtedly, deep learning has the potential to improve clinical workflow and diagnostic efficacy in echocardiography. At CIUS we will continue to pursue these methods, deliver novel and thoroughly evaluated solutions, and, foremost, the knowledge that will make deep learning an essential component of tomorrow's health care.

Andreas Østvik
Researcher | andreas.ostvik@ntnu.no

Researcher CIUS, NTNU, 2020-

Researcher, SINTEF, 2018-

PhD Candidate, CIUS, 2016-2020