Researchers from the National Institutes of Health in Bethesda, Maryland, are using NVIDIA GPUs and deep learning to automatically annotate diseases in chest x-rays.
Accelerated by Tesla GPUs, the team trained convolutional neural networks on a publicly available dataset of chest x-rays and radiology reports, learning to describe the characteristics of a disease such as its location, severity, and the organs affected.
The researchers note that, to the best of their knowledge, this is the first study to mine a publicly available radiology image-and-report dataset not only to classify and detect disease in images, but also to describe its context, much as a human reader would.
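The core idea — a convolutional network that both scores disease labels and predicts contextual attributes such as location and severity from an x-ray image — can be sketched roughly as follows. This is a minimal illustration with random weights, not the authors' architecture; the filter counts, label counts, and names (`conv2d`, `W_disease`, `W_severity`) are all hypothetical.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))       # stand-in for a chest x-ray
kernels = rng.standard_normal((8, 3, 3))  # 8 filters (random, not trained)

# Forward pass: convolution -> ReLU -> global average pooling gives one
# feature per filter; separate linear heads then score each output space.
feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
W_disease = rng.standard_normal((14, 8))   # hypothetical: 14 disease labels
W_severity = rng.standard_normal((4, 8))   # hypothetical: 4 severity grades
disease_logits = W_disease @ feats
severity_logits = W_severity @ feats
print(disease_logits.shape, severity_logits.shape)  # (14,) (4,)
```

In the study itself these components were trained jointly on the image-and-report pairs, so the attribute heads learn to reproduce the kind of context a radiologist writes down; the sketch above only shows the shape of such a multi-head forward pass.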
Read the research paper >>
Detecting and Labeling Diseases in Chest X-Rays with Deep Learning
Apr 14, 2016

Related resources
- DLI course: Medical Image Classification Using the MedNIST Data Set
- DLI course: Deep Learning for Industrial Inspection
- GTC session: Incomplete Modality Federated Learning in Medical Imaging
- GTC session: Advanced Medical AI Development with MONAI: From Interactive Annotation to Foundation Models
- GTC session: Harnessing Distributed Intelligence With Privacy Preservation: Federated Learning for the Early Detection of Lung Cancer
- SDK: MONAI Label