Researchers from the National Institutes of Health in Bethesda, Maryland are using NVIDIA GPUs and deep learning to automatically annotate diseases from chest x-rays.
Using Tesla GPUs for acceleration, the team trained convolutional neural networks on a publicly available dataset of chest x-rays and radiology reports, teaching the models to describe the characteristics of a disease, such as its location, severity, and the affected organs.
The researchers note that, to the best of their knowledge, this is the first study to mine a publicly available radiology image and report dataset not only to classify and detect disease in images, but also to describe disease context much as a human reader would.
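To make the idea concrete, here is a minimal sketch of how multi-label CNN outputs might be post-processed into a report-style annotation covering location and severity. The disease labels, attribute vocabularies, and threshold below are purely illustrative assumptions, not taken from the paper.

```python
# Hypothetical post-processing sketch: convert per-label CNN probabilities
# into a human-readable annotation describing location and severity.
# All label names and the 0.5 threshold are illustrative assumptions.

DISEASE_LABELS = ["cardiomegaly", "effusion", "infiltrate", "normal"]
ATTRIBUTE_LABELS = {
    "location": ["left", "right", "bilateral"],
    "severity": ["mild", "moderate", "severe"],
}

def annotate(disease_probs, attribute_probs, threshold=0.5):
    """Turn probabilities into a short report-style sentence."""
    # Keep every disease whose probability clears the threshold.
    findings = [
        label
        for label, p in zip(DISEASE_LABELS, disease_probs)
        if p >= threshold
    ]
    if not findings or findings == ["normal"]:
        return "No acute disease detected."
    parts = []
    for disease in findings:
        if disease == "normal":
            continue
        # For each attribute, pick the most probable choice, if provided.
        attrs = []
        for attr, choices in ATTRIBUTE_LABELS.items():
            probs = attribute_probs.get(disease, {}).get(attr)
            if probs:
                attrs.append(max(zip(choices, probs), key=lambda t: t[1])[0])
        parts.append(" ".join(attrs + [disease]) if attrs else disease)
    return "Findings: " + "; ".join(parts) + "."
```

For example, `annotate([0.1, 0.9, 0.2, 0.05], {"effusion": {"location": [0.8, 0.1, 0.1], "severity": [0.2, 0.7, 0.1]}})` yields `"Findings: left moderate effusion."` In the actual study the descriptive text is learned from the paired radiology reports rather than assembled from fixed templates like this.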
Read the research paper >>
Detecting and Labeling Diseases in Chest X-Rays with Deep Learning
Apr 14, 2016

Related resources
- DLI course: Medical Image Classification Using the MedNIST Data Set
- DLI course: Deep Learning for Industrial Inspection
- GTC session: AI-Assisted Annotation for Continuous Learning with MONAI Label (Spring 2023)
- GTC session: Deep Dive to Image Generation for Medical Imaging (Spring 2023)
- GTC session: Posters Spotlight: Healthcare (Spring 2023)
- Webinar: Deep Learning for Medical Imaging: Real-World Challenges and Opportunities