By drawing on more precise information about the characteristics of cancerous versus non-cancerous breast lesions, a methodology based on Artificial Intelligence (AI) has demonstrated greater accuracy than traditional modes of imaging.
In the study published in the journal Computer Methods in Applied Mechanics and Engineering, Indian-origin researchers Dhruv Patel and Assad Oberai from the University of Southern California showed that it is possible to train a machine to interpret real-world images using synthetic data and streamline the steps to diagnosis.
In breast ultrasound elastography, once an image of the affected area is taken, it is analysed to determine the displacements inside the tissue. Using this data and the physical laws of mechanics, the spatial distribution of mechanical properties, such as tissue stiffness, is determined.
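As a rough illustration of those two steps, the Python sketch below estimates a one-dimensional displacement field by cross-correlating pre- and post-compression echo windows, and then converts the resulting strain into a relative stiffness under a uniform-stress assumption. The window-based matching, function names and toy data are assumptions made for illustration only; they are not the algorithms used in the study.

```python
import numpy as np

def estimate_displacement(pre, post, window=32):
    """Estimate axial displacement (in samples) per window by cross-correlating
    pre- and post-compression echo signals (a crude stand-in for speckle tracking)."""
    n_windows = len(pre) // window
    disp = np.zeros(n_windows)
    for i in range(n_windows):
        a = pre[i * window:(i + 1) * window]
        b = post[i * window:(i + 1) * window]
        corr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
        disp[i] = corr.argmax() - (window - 1)   # lag at which the windows best align
    return disp

def relative_stiffness(displacement, applied_stress=1.0):
    """Turn displacement into strain (its spatial gradient) and, under a
    uniform-stress assumption, into a relative stiffness: modulus ~ stress / strain."""
    strain = np.gradient(displacement)
    strain = np.where(np.abs(strain) < 1e-6, 1e-6, strain)  # avoid division by zero
    return applied_stress / np.abs(strain)

# Toy run: a random 1-D "echo" signal shifted by 3 samples to mimic compression.
# The displacement is uniform here, so the strain is ~0 and the stiffness saturates;
# this only exercises the plumbing, not a realistic reconstruction.
rng = np.random.default_rng(0)
pre = rng.standard_normal(1024)
post = np.roll(pre, 3)
disp = estimate_displacement(pre, post)
print(disp[:5], relative_stiffness(disp)[:5])
```

Solving the full inverse problem in two or three dimensions is far more involved than this sketch suggests, which is why the second step is the one the researchers wanted to bypass.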
In the study, researchers sought to determine if they could skip the most complicated steps of this workflow.
For this, the researchers used about 12,000 synthetic images to train their Machine Learning algorithm. This process was similar to how photo identification software works, i.e., learning through repeated inputs to recognize a particular person in an image, or to how our brain learns to classify a cat versus a dog.
Given enough examples, the algorithm was able to glean the features that distinguish a benign tumor from a malignant one and make the correct determination.
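A minimal sketch of how such a classifier could be trained is shown below, assuming a small convolutional network in PyTorch and random tensors standing in for the roughly 12,000 synthetic images. The architecture, the 64x64 image size, the labels and the training hyperparameters are all placeholders chosen for illustration, not the model described in the paper.

```python
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Small CNN that maps a single-channel image to two classes (benign/malignant)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),            # two classes: benign vs malignant
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Placeholder "synthetic" dataset: random 64x64 images with random labels,
# standing in for the ~12,000 simulated elastography images.
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256,))

model = LesionClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                       # a few epochs, just to show the loop
    for i in range(0, len(images), 32):      # mini-batches of 32
        batch_x, batch_y = images[i:i + 32], labels[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

On real synthetic training data the network would be learning image features correlated with stiffness contrast; with the random placeholders above the loop merely demonstrates the mechanics of training.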
The researchers achieved nearly 100 per cent classification accuracy on synthetic images. Once the algorithm was trained, they tested it on real-world images to determine how accurately it could provide a diagnosis, measuring the results against biopsy-confirmed diagnoses associated with those images.
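A hedged sketch of that evaluation step, reusing the hypothetical LesionClassifier and trained model from the training sketch above, might look like the following; the tensors standing in for the real-world images and their biopsy-confirmed labels are placeholders.

```python
import torch

@torch.no_grad()
def evaluate_on_real_images(model, real_images, biopsy_labels):
    """Compare model predictions on real-world images against
    biopsy-confirmed labels (0 = benign, 1 = malignant)."""
    model.eval()
    predictions = model(real_images).argmax(dim=1)
    return (predictions == biopsy_labels).float().mean().item()

# Placeholder test set; in the study the ground truth came from biopsies.
real_images = torch.randn(50, 1, 64, 64)     # stand-ins for real elastography inputs
biopsy_labels = torch.randint(0, 2, (50,))   # stand-ins for biopsy-confirmed diagnoses
accuracy = evaluate_on_real_images(model, real_images, biopsy_labels)
print(f"real-world accuracy: {accuracy:.1%}")
```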
“We had about 80 per cent accuracy rate. We will continue to refine the algorithm by using more real-world images as inputs,” Oberai said.