Go deep: Artificial intelligence improves the accuracy of breast ultrasound diagnoses

Scientists train deep learning algorithms to classify breast lesions based on ultrasound images in a large multicenter study

In 2020, the World Health Organization's International Agency for Research on Cancer reported that breast cancer is the most commonly diagnosed cancer and the leading cause of cancer death among women worldwide. These alarming statistics not only call for new methods for the early detection of breast cancer, but also highlight the importance of risk prognosis for the occurrence and development of the disease. Ultrasound is a powerful, non-invasive, and life-saving diagnostic tool. However, it is sometimes difficult for ultrasound doctors to distinguish malignant tumors from other types of benign growths. In China in particular, breast masses are divided into four categories: benign tumors, malignant tumors, inflammatory masses, and adenosis (enlargement of the milk-producing glands). If a benign breast mass is mistakenly diagnosed as a malignant tumor, a biopsy usually follows, putting the patient at unnecessary risk. Correct interpretation of ultrasound images becomes even more difficult given the large workload of specialist doctors.

Ultrasound is an invaluable diagnostic tool for the early detection of breast cancer, but classifying lesions is sometimes difficult and time-consuming. Could artificial intelligence be the answer to these problems? Photo credit: Chinese Medical Journal

Could deep learning algorithms be the solution to this puzzle? Professor Wen He (Beijing Tian Tan Hospital, Capital Medical University, China) believes so. "Artificial intelligence is good at identifying complex patterns in images and quantifying information that people find difficult to see, complementing clinical decision-making," he explains. Although great progress has been made in integrating deep learning algorithms into medical image analysis, most breast ultrasound studies deal exclusively with distinguishing malignant from benign diagnoses. In other words, existing approaches do not attempt to divide breast masses into the four categories described above.

To overcome this limitation, Dr. He, in collaboration with scientists from 13 hospitals across China, carried out the largest multicenter study on breast ultrasound to date to train convolutional neural networks (CNNs) to classify ultrasound images. As reported in the Chinese Medical Journal, the scientists collected 15,648 images from 3,623 patients and used half to train, and the other half to test, three different CNN models. The first model used only 2D ultrasound intensity images as input, while the second model also included color flow Doppler images, which provide information about blood flow around breast lesions. The third model further added pulsed wave Doppler images, which provide spectral information about a specific area within the lesions.
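
The article does not describe the networks' internal design or preprocessing, so the following is only a minimal, hypothetical sketch (in PyTorch) of how a grayscale 2D ultrasound frame and a color flow Doppler image could be combined as input channels of a single CNN with four output classes, roughly in the spirit of the second model; all names and layer sizes are invented for illustration.

```python
# Minimal, hypothetical sketch only -- the study's actual architecture and
# preprocessing are not described in this article. It shows one common way to
# combine a grayscale 2D ultrasound frame with a color flow Doppler image by
# stacking them as input channels of a small CNN with four output classes.
import torch
import torch.nn as nn


class MultiInputBreastCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # 1 grayscale channel + 3 color Doppler (RGB) channels = 4 input channels
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Four classes: benign tumor, malignant tumor, inflammatory mass, adenosis
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, gray, doppler):
        x = torch.cat([gray, doppler], dim=1)  # (N, 4, H, W)
        return self.classifier(self.features(x).flatten(1))


# Example with random tensors standing in for real images:
model = MultiInputBreastCNN()
logits = model(torch.randn(8, 1, 256, 256), torch.randn(8, 3, 256, 256))
print(logits.shape)  # torch.Size([8, 4])
```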

Each CNN consisted of two modules. The first, the detection module, contained two main sub-modules whose overall task was to determine the position and size of the breast lesion in the original 2D ultrasound image. The second, the classification module, received only the cropped portion of the ultrasound image containing the detected lesion. Its output layer contained four categories corresponding to the four classifications of breast masses commonly used in China.
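
Again purely as an illustration: the sub-modules themselves are not specified in this article, but the detect-crop-classify flow described above might look roughly like the sketch below, with a placeholder bounding box standing in for the detection module's output and hypothetical names throughout.

```python
# Rough sketch of the two-module flow described above: a detection step yields a
# lesion bounding box, the corresponding patch is cropped from the 2D image, and
# a 4-class classification module scores the patch. The bounding box below is a
# placeholder for the detection module's output; all names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def crop_lesion(image, box):
    """Extract the detected lesion region (x0, y0, x1, y1) from an image tensor."""
    x0, y0, x1, y1 = box
    return image[..., y0:y1, x0:x1]


class LesionClassifier(nn.Module):
    """Classification module: maps a cropped lesion patch to four categories."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, num_classes),
        )

    def forward(self, patch):
        return self.net(patch)


image = torch.randn(1, 1, 512, 512)   # one 2D ultrasound frame
box = (120, 150, 320, 340)            # stand-in for the detection module's output
patch = F.interpolate(crop_lesion(image, box), size=(224, 224))
probs = LesionClassifier()(patch).softmax(dim=1)
print(probs)  # probabilities over the four breast-mass categories
```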

First, the scientists checked which of the three models performed best. The accuracies were similar, at 88%, but the second model, which combined 2D images with color flow Doppler data, performed slightly better than the other two. The pulsed wave Doppler data may not have contributed positively to performance because only a few pulsed wave images were available in the entire dataset. The researchers then checked whether differences in tumor size led to differences in performance. While larger lesions led to higher accuracy for benign tumors, size did not appear to affect accuracy in detecting malignancies. Finally, the scientists tested one of their CNN models by comparing its performance with that of 37 experienced sonologists on a set of 50 randomly selected images. The results were very positive for the CNN in every respect, as Dr. He notes: "The accuracy of the CNN model was 89.2%, with a processing time of less than two seconds. In contrast, the sonologists' average accuracy was 30%, with an average time of 314 seconds."
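
As an illustration of the kind of head-to-head comparison described above (not the study's actual protocol), classification accuracy and average per-image processing time on a held-out set could be measured along these lines; the model, images, and labels below are random placeholders.

```python
# Toy sketch of measuring classification accuracy and average per-image
# processing time on a held-out image set, in the spirit of the comparison
# above. The model, images, and labels are random placeholders, not the
# study's data.
import time
import torch


@torch.no_grad()
def evaluate(model, images, labels):
    start = time.perf_counter()
    preds = model(images).argmax(dim=1)
    elapsed = time.perf_counter() - start
    accuracy = (preds == labels).float().mean().item()
    return accuracy, elapsed / len(images)


model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(224 * 224, 4))
images = torch.randn(50, 1, 224, 224)   # 50 randomly selected test images
labels = torch.randint(0, 4, (50,))     # ground-truth categories (0..3)
acc, sec_per_image = evaluate(model, images, labels)
print(f"accuracy={acc:.1%}, time per image={sec_per_image:.6f}s")
```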

This study clearly demonstrates the potential of deep learning algorithms as complementary tools for diagnosing breast lesions with ultrasound. In addition, unlike previous studies, the researchers included data obtained with ultrasound machines from various manufacturers, indicating that the trained CNN models are applicable regardless of the ultrasound equipment used in each hospital. In the future, the integration of artificial intelligence into ultrasound-based diagnostic procedures could accelerate the early detection of cancer. It would bring other benefits as well, as Dr. He explains: "Since CNN models do not require special equipment, their diagnostic recommendations could reduce prescribed biopsies, simplify the workload of sonologists, and enable more targeted and refined treatment."

Let's hope artificial intelligence will soon find a home in ultrasound imaging so doctors can work smarter, not harder.

Reference

Original Article Title: Deep Learning for Two-Dimensional Color Doppler Flow Imaging Ultrasound Images Greatly Improves Diagnostic Performance in Classification of Breast Masses: A Multicenter Study

Journal: Chinese Medical Journal

DOI: 10.1097/CM9.0000000000001329

Source: EurekAlert!
