Detecting and Labeling Diseases in Chest X-Rays with Deep Learning
Apr 14, 2016

Researchers from the National Institutes of Health in Bethesda, Maryland, are using NVIDIA GPUs and deep learning to automatically annotate diseases from chest x-rays.
Using Tesla GPUs to accelerate training, the team trained convolutional neural networks on a publicly available radiology dataset of chest x-rays and accompanying reports to describe the characteristics of a disease, such as its location, severity, and the affected organs.
The researchers note that, to the best of their knowledge, this is the first study to mine a publicly available dataset of radiology images and reports not only to classify and detect disease in the images, but also to describe its context much as a human observer reading a chest x-ray would.
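As an illustration only, the sketch below shows one common way to pair a CNN image encoder with an RNN that generates report-style annotation tokens (for example, location and severity terms). It is a minimal PyTorch example under assumed choices of backbone, vocabulary size, and hyperparameters; it is not the authors' implementation from the paper.

```python
# Minimal sketch (assumption: a ResNet-18 encoder feeding an LSTM decoder),
# illustrating image-to-annotation generation; all sizes are placeholders.
import torch
import torch.nn as nn
from torchvision import models

class ChestXrayAnnotator(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()               # keep 512-d pooled image features
        self.encoder = backbone
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.init_h = nn.Linear(512, hidden_dim)  # image features -> initial LSTM state
        self.init_c = nn.Linear(512, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, tokens):
        feats = self.encoder(images)               # (B, 512)
        h0 = self.init_h(feats).unsqueeze(0)       # (1, B, hidden)
        c0 = self.init_c(feats).unsqueeze(0)
        emb = self.embed(tokens)                   # (B, T, embed)
        hidden, _ = self.rnn(emb, (h0, c0))        # (B, T, hidden)
        return self.out(hidden)                    # per-step token logits

# Toy forward pass on random data, just to show the tensor shapes involved.
model = ChestXrayAnnotator(vocab_size=1000)
images = torch.randn(2, 3, 224, 224)               # two fake chest x-rays
tokens = torch.randint(0, 1000, (2, 12))            # fake annotation token ids
logits = model(images, tokens)                       # (2, 12, 1000)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), tokens.reshape(-1))
```

In this kind of setup, the encoder summarizes the x-ray into a feature vector and the decoder emits annotation terms one token at a time; training minimizes cross-entropy against the tokens derived from the radiology reports.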
Read the research paper >>
Related resources
- DLI course: Medical Image Classification Using the MedNIST Data Set
- DLI course: Deep Learning for Industrial Inspection
- DLI course: Image Classification with TensorFlow: Radiomics - 1p19q Chromosome Status Classification
- GTC session: Mitigating Spurious Correlations for Medical Image Classification via Natural Language Concepts
- GTC session: MONAI Label: AI-Assisted Annotation for Continuous Learning for Radiology, Pathology, and Medical Video Data
- GTC session: Revolutionizing Healthcare through AI-Empowered Solutions and Medical Devices