Creation and validation of a chest X-ray dataset with eye-tracking and report dictation for AI development
Open Access
- 25 March 2021
- journal article
- research article
- Published by Springer Science and Business Media LLC in Scientific Data
- Vol. 8 (1), 1-18
- https://doi.org/10.1038/s41597-021-00863-5
Abstract
We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence research. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data for each case: the CXR image, the transcribed radiology report text, the radiologist's dictation audio, and eye gaze coordinate data. We hope this dataset can contribute to various areas of research, particularly explainable and multimodal deep learning/machine learning methods. Furthermore, investigators working on disease classification and localization, automated radiology report generation, and human-machine interaction can benefit from these data. We report deep learning experiments that utilize attention maps produced from the eye gaze data to show the potential utility of this dataset.
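The attention maps mentioned above can be derived from the raw gaze coordinates by accumulating fixation points onto the image grid and smoothing them. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' actual pipeline: it assumes fixations arrive as `(x, y, duration_ms)` tuples (field names are assumptions, not the dataset's schema) and models foveal spread with a Gaussian blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def gaze_attention_map(fixations, image_shape, sigma=30.0):
    """Turn a list of (x, y, duration_ms) fixations into a [0, 1] heatmap.

    This is an illustrative sketch; the published dataset/pipeline may
    weight or smooth fixations differently.
    """
    heatmap = np.zeros(image_shape, dtype=np.float64)
    for x, y, duration in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < image_shape[0] and 0 <= xi < image_shape[1]:
            heatmap[yi, xi] += duration  # weight each fixation by dwell time
    heatmap = gaussian_filter(heatmap, sigma=sigma)  # approximate foveal spread
    if heatmap.max() > 0:
        heatmap /= heatmap.max()  # normalize so the map can act as an attention mask
    return heatmap


# Hypothetical fixations on a 512x512 CXR: (x, y, duration_ms)
fixations = [(120.0, 200.0, 350.0), (400.0, 310.0, 500.0)]
attention = gaze_attention_map(fixations, image_shape=(512, 512))
```

A map built this way can be multiplied element-wise with a model's feature maps, or used as a supervision target for network attention, which is the general pattern behind the deep learning experiments described in the abstract.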