To help neurosurgeons diagnose brain tumors more efficiently, researchers from the University of Michigan developed a deep learning-based imaging technique that can reduce the tumor diagnosis process during surgery from 30-40 minutes to less than three minutes.
First unveiled in 2017, the technique, called stimulated Raman histology (SRH), helps neurosurgeons assess tumor tissue more rapidly in the operating room. Tumors are classified with deep learning, using digital images generated intraoperatively from fresh, resected specimens.
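To make the workflow concrete, here is a minimal, hypothetical sketch of how a trained CNN could be applied to an intraoperative SRH image: tile the mosaic into patches, average the per-patch class probabilities, and report the most likely diagnosis. The function name, patch size, and class list are illustrative assumptions, not the study's actual code.

```python
import numpy as np
import tensorflow as tf

PATCH = 300  # assumed patch size in pixels
CLASSES = ["glioma", "meningioma", "metastasis", "nondiagnostic"]  # illustrative subset

def classify_srh_mosaic(image: np.ndarray, model: tf.keras.Model) -> str:
    """Return a predicted diagnostic class for one SRH mosaic (sketch only)."""
    h, w, _ = image.shape
    # Tile the large SRH image into non-overlapping patches.
    patches = np.stack([
        image[y:y + PATCH, x:x + PATCH]
        for y in range(0, h - PATCH + 1, PATCH)
        for x in range(0, w - PATCH + 1, PATCH)
    ])
    probs = model.predict(patches, verbose=0)          # (n_patches, n_classes)
    return CLASSES[int(probs.mean(axis=0).argmax())]   # aggregate over patches
```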
In a new Nature Medicine paper, Todd Hollon, M.D., a chief neurosurgical resident at Michigan Medicine, Daniel Orringer, M.D., an associate professor of neurosurgery at NYU Langone Health, and colleagues describe their most recent clinical trial of the technique.
“This is the first prospective trial evaluating the use of artificial intelligence in the operating room,” said Hollon, lead author of the publication. “We have executed clinical translation of an AI-based workflow.”
Since it was first developed, the technique has already been used on over 500 patients as a first-line diagnostic tool for neurosurgery and otolaryngology, the researchers said.
The researchers trained a convolutional neural network (CNN) using CUDA, NVIDIA RTX GPUs, and the cuDNN-accelerated TensorFlow deep learning framework. With output classes covering more than 90% of all brain tumors diagnosed in the USA, the model achieved 94.6% classification accuracy.
The CNN was trained on more than 2.5 million images from 415 patients, obtained via stimulated Raman histology.
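For readers who want a sense of what such a training setup looks like, the sketch below trains a small image classifier on labeled SRH patches with cuDNN-accelerated TensorFlow. The directory layout, patch size, class count, and network architecture are placeholders for illustration; they are not the published model or its hyperparameters.

```python
import tensorflow as tf

IMG_SIZE = (300, 300)   # assumed SRH patch size for this sketch
NUM_CLASSES = 13        # placeholder; match the dataset's diagnostic categories
BATCH_SIZE = 32

# Load labeled SRH patches from class-named subdirectories (hypothetical path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "srh_patches/train",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",
)

# A deliberately small CNN classifier; the published model is far larger.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# On an NVIDIA GPU, TensorFlow dispatches the convolutions to cuDNN kernels,
# which is what makes training at the millions-of-images scale practical.
model.fit(train_ds, epochs=10)
```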
“The results reported in our study represent the culmination of a 9-year journey at Michigan Medicine to develop and implement a better way to do brain tumor surgery, one that leverages advances in optics and artificial intelligence to make safer, more effective decisions in the operating room,” said Orringer, the senior author of the Nature article.