Computer Vision / Video Analytics

Automatically Segmenting Brain Tumors with AI

Each year tens of thousands of people in the United States are diagnosed with a brain tumor. To help physicians more effectively analyze, treat, and monitor tumors, NVIDIA researchers have developed a robust deep learning-based technique that uses 3D magnetic resonance images to automatically segment tumors. Segmentation delineates the boundary of the affected tumor region.
In countries with a shortage of trained experts, the technology could one day serve as a life-saving tool that helps patients receive the care they need.

The team used ImFusion visualization software to run inference and visualize the results directly from its GUI, which they said required only a simple configuration setup.
“Automated segmentation of 3D brain tumors can save physicians time and provide an accurate reproducible solution for further tumor analysis and monitoring. In this work, we describe our semantic segmentation approach for volumetric 3D brain tumor segmentation from multimodal 3D MRIs, which won the BraTS 2018 challenge,” said Andriy Myronenko, a senior research scientist at NVIDIA.

A typical segmentation example with true and predicted labels overlaid over T1c MRI axial, sagittal and coronal slices. The whole tumor (WT) class includes all visible labels (a union of green, yellow and red labels), the tumor core (TC) class is a union of red and yellow, and the enhancing tumor core (ET) class is shown in yellow (a hyperactive tumor part). The predicted segmentation results match the ground truth well.
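The three evaluated regions are nested unions of the underlying annotation labels. As a minimal sketch, assuming the standard BraTS label convention (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor), the subregions can be derived like this:

```python
import numpy as np

# Hypothetical voxel label map using the standard BraTS label values:
# 0 = background, 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor
labels = np.array([0, 1, 2, 4, 0, 4])

# The three nested subregions scored in the challenge
wt = np.isin(labels, [1, 2, 4])  # whole tumor: union of all tumor labels
tc = np.isin(labels, [1, 4])     # tumor core: necrotic + enhancing
et = labels == 4                 # enhancing tumor only
```

Because the regions are nested (ET ⊆ TC ⊆ WT), each one is a binary mask rather than a mutually exclusive class.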

The BraTS challenge, or the Multimodal Brain Tumor Segmentation Challenge, is an international competition focused on the segmentation of brain tumors. The challenge is organized by the University of Pennsylvania’s Perelman School of Medicine.
In developing the work, Myronenko focused on gliomas, one of the most common types of primary brain tumors. High-grade gliomas are an aggressive type of malignant brain tumor, and the best tools physicians have to diagnose them are magnetic resonance images. However, manually delineating a tumor, that is, determining the exact position of its boundary in an image, requires anatomical expertise. The process is also expensive and prone to human error. That is why automatic segmentation is such an important tool.
Using data from 19 institutions and several MRI scanners, Myronenko trained an encoder-decoder convolutional neural network to extract the features of a brain MRI.
Schematic visualization of the network architecture. Input is a four channel 3D MRI crop, followed by initial 3x3x3 3D convolution with 32 filters. Each green block is a ResNet-like block with the GroupNorm normalization. The output of the segmentation decoder has three channels (with the same spatial size as the input) followed by a sigmoid for segmentation maps of the three tumor subregions. The VAE branch reconstructs the input image into itself, and is used only during training to regularize the shared encoder.

The encoder extracts the features of the images and the decoder reconstructs the dense segmentation masks of an image.
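As the caption above notes, the decoder ends in a three-channel output followed by a sigmoid, one channel per nested subregion. A minimal sketch of that output stage, with a tiny hypothetical volume standing in for the full-size crop:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical decoder output: 3 channels (WT, TC, ET) with the same
# spatial size as the input crop (a tiny 4x4x4 volume here for illustration)
logits = np.random.randn(3, 4, 4, 4)

probs = sigmoid(logits)  # per-channel sigmoid, not a softmax over channels,
masks = probs > 0.5      # because the three subregions overlap (they are nested)
```

Using an independent sigmoid per channel, rather than a softmax, lets a single voxel belong to all three subregions at once, which matches the nested WT/TC/ET definition.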
The network was trained on NVIDIA Tesla V100 GPUs in a DGX-1 server with the cuDNN-accelerated TensorFlow deep learning framework. “During training we used a random crop of size 160x192x128, which ensures that most image content remains within the crop area,” Myronenko said.
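A random crop of that size can be sketched as follows; the full-volume dimensions and the `random_crop` helper below are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def random_crop(volume, crop_size, rng=None):
    """Randomly crop a channels-first (C, D, H, W) volume to (C, *crop_size)."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, dim - c + 1)
              for dim, c in zip(volume.shape[1:], crop_size)]
    slices = tuple(slice(s, s + c) for s, c in zip(starts, crop_size))
    return volume[(slice(None),) + slices]

# Four-channel MRI (e.g. T1, T1c, T2, FLAIR) at a hypothetical full size
mri = np.zeros((4, 240, 240, 155), dtype=np.float32)
patch = random_crop(mri, (160, 192, 128))
# patch.shape == (4, 160, 192, 128)
```

All four modality channels are cropped with the same offsets so they stay spatially aligned.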
On the BraTS 2018 testing dataset, the method achieved average Dice scores of 0.7664, 0.8839 and 0.8154 for enhancing tumor, whole tumor and tumor core, respectively, Myronenko wrote in the paper.
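The Dice score used for those results measures overlap between the predicted and ground-truth binary masks, with 1.0 meaning a perfect match. A minimal sketch:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 0, 0])
dice(a, b)  # 2*1 / (2+1) ≈ 0.667
```

In the challenge, the score is computed separately for each of the three subregions (WT, TC, ET) and averaged over cases.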
The research is being presented at RSNA’s 104th Scientific and Annual Meeting in Chicago, Illinois.
