As with any disease, detecting breast cancer early is key to effective treatment and good outcomes. Mammography provides high-quality images that radiologists then read, looking for telltale signs.
Once a growth has been identified, the question is whether it is benign or malignant. The most reliable way to tell is a biopsy, but that is an invasive procedure, and errors still occur. Some patients are told they have cancer when there actually isn’t any, while others get the reverse result and their treatment is delayed.
One key improvement is thought to be the ability to determine whether a growth is malignant or benign without resorting to a biopsy. A recent research study attempted exactly that with a new take on who, or rather what, would be doing the diagnosing: artificial intelligence. The results were encouraging.
A relatively new diagnostic test for breast cancer is ultrasound elastography, which measures the stiffness of breast tissue. The technique can be effective because cancerous tissue is typically stiffer than the healthy tissue around it. Ultrasound elastography vibrates the breast tissue, creating a wave. The wave distorts the ultrasound scan, highlighting areas of the breast whose mechanical properties differ from the surrounding tissue.
It is this difference in tissue properties that allows a doctor to determine whether a lesion is cancerous or benign. The problem with ultrasound elastography, however, is that analyzing the results is time consuming, involves several steps, and requires complex problem solving. That keeps it from becoming a first-line diagnostic approach.
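The link between wave behavior and stiffness can be made concrete. A standard relation in shear-wave elastography, shown here purely for illustration and not taken from this particular study, estimates tissue stiffness from how fast the shear wave travels:

```python
# Illustrative sketch: the standard shear-wave elastography relation for
# nearly incompressible soft tissue, E ~= 3 * rho * c^2. The numbers below
# are rough literature ranges, not values from the study discussed here.

def youngs_modulus_kpa(shear_wave_speed_m_s, density_kg_m3=1000.0):
    """Estimate Young's modulus E (in kPa) from shear-wave speed c (in m/s)."""
    e_pa = 3.0 * density_kg_m3 * shear_wave_speed_m_s ** 2
    return e_pa / 1000.0  # convert Pa -> kPa

# Healthy breast tissue carries shear waves at roughly 1-2 m/s; stiff
# malignant lesions can exceed ~4 m/s.
print(youngs_modulus_kpa(1.5))  # 6.75 kPa (soft tissue)
print(youngs_modulus_kpa(4.5))  # 60.75 kPa (stiff lesion)
```

Because stiffness grows with the square of wave speed, even a modest difference in how fast the wave crosses a lesion translates into a large, easily highlighted stiffness contrast.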
That’s what the study by researchers from the Viterbi School of Engineering at the University of Southern California sought to change. Their study put an algorithm on the job. The thought was that this could reduce the steps necessary to draw information from the ultrasound elastography images.
The team, led by Professor Assad Oberai of USC, recently published its findings in the journal Computer Methods in Applied Mechanics and Engineering.
For the study, the researchers used synthetic data rather than genuine ultrasound elastography scans, owing to the scarcity of real images. Professor Oberai explained that the team might have been able to obtain roughly 1,000 actual medical images, but instead trained the algorithm on more than 12,000 synthetic ones. The team felt this allowed a more extensive first test of the algorithm’s potential.
The researchers wanted to see whether they could train an algorithm to differentiate between malignant and benign lesions in breast scans.
The algorithm was 100% accurate on these synthetic images. The next step was to test it on real images, but the team had access to just 10 scans: five with malignant lesions and five with benign ones.
On the real images, the algorithm got 8 out of 10 right, an accuracy of 80%. That is clearly not accurate enough for clinical use, although the algorithm did shorten the analysis considerably.
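The train-on-synthetic, test-on-real workflow described above can be sketched in miniature. The actual study applied a deep-learning algorithm to elastography images; the toy below substitutes a one-feature logistic regression over a made-up "stiffness contrast" value, so every name, number, and data point here is an illustrative assumption, not the study's:

```python
# Toy sketch of the study's workflow: train on abundant synthetic data,
# then evaluate on a small set of "real" cases. Illustrative only.
import math
import random

def train_logistic(samples, labels, lr=0.1, epochs=5):
    """Fit w, b for p(malignant) = sigmoid(w*x + b) via per-sample gradient steps."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def accuracy(w, b, samples, labels):
    """Fraction of cases where the 0.5-threshold prediction matches the label."""
    hits = 0
    for x, y in zip(samples, labels):
        pred = 1 if (w * x + b) > 0.0 else 0  # sigmoid > 0.5 iff w*x + b > 0
        hits += (pred == y)
    return hits / len(labels)

random.seed(0)
# Synthetic training set: a made-up "stiffness contrast" feature where
# benign lesions cluster low and malignant lesions cluster high.
pairs = [(random.gauss(1.0, 0.3), 0) for _ in range(6000)]
pairs += [(random.gauss(3.0, 0.5), 1) for _ in range(6000)]
random.shuffle(pairs)
x_train = [x for x, _ in pairs]
y_train = [y for _, y in pairs]

w, b = train_logistic(x_train, y_train)
print("synthetic-set accuracy:", accuracy(w, b, x_train, y_train))

# A tiny held-out "real" set of 10 cases, echoing the study's evaluation step.
x_real = [0.9, 1.2, 0.8, 1.1, 2.4, 3.1, 2.8, 3.3, 2.9, 1.0]
y_real = [0, 0, 0, 0, 1, 1, 1, 1, 1, 0]
print("real-set accuracy:", accuracy(w, b, x_real, y_real))
```

The pattern mirrors the study: plentiful synthetic examples supply the training signal, and a handful of real cases serve only as the final check.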
The researchers noted that training the algorithm on data from real scans might have yielded better accuracy. Naturally, that is the next phase of this research: using only actual elastography scans.
Professor Oberai said the goal of this study wasn’t to replace a human diagnostic expert. Rather, the aim was for the algorithm to raise warning flags that a human diagnostic professional could then investigate further.
“These algorithms will be most useful when they do not serve as black boxes,” he explains. “What did it see that led it to the final conclusion? The algorithm must be explainable for it to work as intended.”
It is still early in the process of using AI in cancer diagnosis, but the results of this study show considerable promise moving forward.