Computer Vision / Video Analytics

AI Helps Accelerate the Drug Development Process

University of California, Irvine researchers developed a deep learning-based approach to accelerate drug discovery and cancer research.
“We have developed a convolutional neural network to improve the data analysis processes for high-throughput drug screening using our microphysiological system (MPS),” the researchers stated in their paper.
A microphysiological system is an interconnected set of 2D or 3D cellular constructs, frequently referred to as organs-on-chips or in vitro organ constructs.
“This network can classify new images near instantaneously and surpasses human accuracy on this task,” the team said. “The accuracy of our best model is significantly better than our minimally-trained human raters and requires no human intervention to operate. This model is a first step toward automation of data analysis for high-throughput drug screening.”
Using NVIDIA TITAN Xp GPUs and the cuDNN-accelerated Keras deep learning framework, the researchers trained their convolutional neural network on thousands of blood vessel images, including some images obtained with data augmentation.
Once trained, the system distinguishes between effective and ineffective drug compounds through automatic analysis of vascularization images.
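The paper does not publish its exact architecture in this article, but a binary vascularization classifier of this kind can be sketched in Keras roughly as follows; the layer sizes and the 128×128 single-channel input are illustrative assumptions, not the researchers' design:

```python
import tensorflow as tf

def build_classifier(input_shape=(128, 128, 1)):
    """Minimal sketch of a binary CNN for vessel images: a stack of
    convolution/pooling blocks followed by a sigmoid output that scores
    a compound image as effective (1) vs. ineffective (0)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        # Single sigmoid unit: probability that the drug is effective
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
```

Once compiled with a binary cross-entropy loss and fit on labeled vessel images, such a model returns one score per image, which is what makes batch screening fully automatic.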

A set of blood vessel images before (left) and after (right) alignment. The pre-drug-application images are placed in the image’s green channel and the post-drug-application images are placed in the red channel. The separate green and red vessels in the left image show that the pre- and post-drug-application images are misaligned; the more pervasive yellow in the right image comes from the green and red channels being aligned on top of each other.
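The red/green compositing described in the caption is straightforward to reproduce with NumPy; this sketch (the function name is mine) stacks the two grayscale images into one RGB array so that aligned vessels render yellow:

```python
import numpy as np

def composite_pre_post(pre, post):
    """Build an RGB overlay from two grayscale vessel images:
    pre-drug image -> green channel, post-drug image -> red channel.
    Pixels where both images have bright vessels show red + green,
    which displays as yellow, so yellow indicates good alignment."""
    pre = pre.astype(np.float32)
    post = post.astype(np.float32)
    rgb = np.zeros(pre.shape + (3,), dtype=np.float32)
    rgb[..., 0] = post  # red channel: post-drug-application image
    rgb[..., 1] = pre   # green channel: pre-drug-application image
    return rgb          # blue channel stays zero
```

Viewing this composite makes misregistration immediately visible: any residual red or green fringe marks where the two acquisitions disagree.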

For inference, the team used the same GPUs as for training.
“The success of this convolutional model is driven in part by carefully tuning our loss function to discourage false negatives but also by the steps taken to control overfitting in the model,” the team said. “One regularization strategy was to augment our limited training dataset to virtually infinite size via randomly transforming images during each training pass.”
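Both ideas quoted above can be sketched in a few lines of NumPy. The weight value and function names here are hypothetical, not taken from the paper: up-weighting the positive term in binary cross-entropy makes missed positives (false negatives) cost more than false alarms, and applying random flips and rotations on every pass yields an effectively unlimited stream of distinct training images:

```python
import numpy as np

def weighted_bce(y_true, y_pred, fn_weight=5.0, eps=1e-7):
    """Binary cross-entropy with the positive-class term up-weighted.
    fn_weight > 1 penalizes false negatives more heavily than false
    positives; 5.0 is an illustrative value, not the paper's."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    loss = -(fn_weight * y_true * np.log(y_pred)
             + (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()

def random_transform(img, rng):
    """On-the-fly augmentation: random horizontal/vertical flips and a
    random 90-degree rotation applied each time an image is drawn, so
    the network rarely sees the exact same array twice."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    return np.rot90(img, k=int(rng.integers(4)))
```

Because the transforms are sampled fresh on each training pass rather than precomputed, the effective dataset size grows without storing any extra images, which is the regularization effect the team describes.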
A pre-print version of the research was recently published in IEEE/ACM Transactions on Computational Biology and Bioinformatics.
