Detecting cancer with AI

Friday, 30 April, 2021

An innovative artificial intelligence (AI) application could help examine tissue samples and identify signs of cancer. PathoFusion was developed by an international collaboration led by Associate Professor Xiu Ying Wang and Professor Manuel Graeber of the University of Sydney, with support from the Australian Nuclear Science and Technology Organisation (ANSTO).

Study co-author Richard Banati, Professor of Medical Radiation Sciences and Medical Imaging at ANSTO, said, “The idea behind PathoFusion was to create a novel, advanced, deep learning model to recognise malignant features and immune response markers, independent of human intervention, and map them simultaneously in a digital image.”

A bifocal deep learning framework was designed around a convolutional neural network (ConvNet/CNN), an architecture originally developed for natural image classification. A CNN takes an input image, assigns learnable importance to various aspects or objects within it, and learns to differentiate one from another.
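As a rough illustration of the operation at the heart of any CNN (not PathoFusion's actual architecture, which is not detailed here), the sketch below applies a hand-crafted edge-detection kernel to a tiny synthetic image. In a trained network, many such kernels are learned from labelled examples rather than written by hand:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image and
    sum element-wise products, producing a feature map whose high values
    mark regions the kernel responds to."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny 5x5 "image" with a bright vertical edge down the middle
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)

# A vertical-edge detector; strong responses mark where the edge lies
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

feature_map = conv2d(image, edge_kernel)
print(feature_map)  # high values in the columns containing the edge
```

Stacking many such filtered layers, with nonlinearities between them, is what lets a CNN build up from edges and textures to the higher-level structures a pathologist would recognise.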

The experiment to evaluate the model involved examining tissue from cases of glioblastoma, an aggressive cancer that affects the brain or spine. The team used the expert input of neuropathologists to ‘train’ the software to mark key features. The findings were published in the journal Cancers.

Experiments confirmed that the application achieved a high level of accuracy in recognising and mapping six typical neuropathological features that are markers of malignancy. PathoFusion identified forms and structural features with a precision of 94% and sensitivity of 94.7%, and immune markers at a precision of 96.2% and sensitivity of 96.1%.

The application overlays layers of information, covering dead or dying tissue, the proliferation of microscopic blood vessels and other vasculature, with the expression of a tumour genetic marker, CD276, fusing the data into a single heatmap. The image uses strong colours to depict the features and their distribution, whereas conventional staining techniques are often monochromatic.
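A minimal sketch of the fusion idea, under the assumption that each feature layer has already been scored per pixel and normalised to [0, 1] (the channel assignment here is illustrative, not PathoFusion's actual colour scheme):

```python
import numpy as np

def fuse_heatmaps(morphology, marker):
    """Fuse two normalised per-pixel feature maps into one RGB heatmap:
    morphological features drive the red channel and the immune marker
    (e.g. CD276 expression) drives the green channel, so regions high in
    both appear yellow and each feature alone keeps its own colour."""
    rgb = np.zeros(morphology.shape + (3,))
    rgb[..., 0] = morphology  # red: structural/morphological score
    rgb[..., 1] = marker      # green: marker expression score
    return rgb

# Toy 2x2 score maps standing in for whole-slide feature maps
morph = np.array([[1.0, 0.2],
                  [0.0, 0.8]])
cd276 = np.array([[0.1, 0.9],
                  [0.0, 0.7]])

heatmap = fuse_heatmaps(morph, cd276)
```

The point of the fused image is exactly this kind of co-localisation: one picture shows at a glance where malignant structure and immune-marker expression coincide.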

“The research confirmed that it is possible to train neural networks effectively using only a relatively small number of cases; that should be useful for some scenarios,” Banati said.

The research succeeded in efficiently training a convolutional neural network to recognise key features in stained slides, improving the model's feature recognition while using fewer physical cases than conventionally needed for neural network training, and establishing a method for incorporating immunological data.

Image caption: A fusion heatmap of cancerous structures in H&E image and immunopositivity of CD276 marker. Image credit: The University of Sydney.

  • All content Copyright © 2021 Westwick-Farrow Pty Ltd