Journal of the American Medical Informatics Association

Journal Information
ISSN / EISSN : 1067-5027 / 1527-974X
Published by: Oxford University Press (OUP) (DOI prefix 10.1093)
Total articles ≅ 3,986
Current Coverage
SCOPUS
LOCKSS
MEDICUS
MEDLINE
PUBMED
PMC
SCIE
SSCI
Archived in
SHERPA/ROMEO

Latest articles in this journal

Fanny Sampurno, Christoph Kowalski, Sarah E Connor, Anissa V Nguyen, Àngels Pont Acuña, Chi-Fai Ng, Günter Feick, Olatz Garin Boronat, Sebastian Dieng, et al.
Journal of the American Medical Informatics Association; https://doi.org/10.1093/jamia/ocab281

Abstract:
Since 2017, the TrueNTH Global Registry (TNGR) has aimed to drive improvement in patient outcomes for individuals with localized prostate cancer by collating data from healthcare institutions across 13 countries. As TNGR matures, a systematic evaluation of existing processes and documents is necessary to determine whether the registry is operating as intended. The main supporting documents, the protocol and the data dictionary, were comprehensively reviewed by an international working group in a series of meetings over a 10-month period. In parallel, individual consultations with local institutions regarding a benchmarking quality-of-care report were conducted. Four consensus areas for improvement emerged: updating operational definitions, appraisal of the recruitment process, refinement of data elements, and improvement of data quality and reporting. The recommendations presented were drawn from our collective experience and accumulated knowledge in operating an international registry and can be readily generalized to other health-related reporting programs beyond clinical registries.
Jan A Kors, Solomon Ioannou, Luis H John, Aniek F Markus, Alexandros Rekkas, Maria A J De Ridder, Peter R Rijnbeek, et al.
Journal of the American Medical Informatics Association; https://doi.org/10.1093/jamia/ocac002

Abstract:
Objectives: This systematic review aims to provide further insights into the conduct and reporting of clinical prediction model development and validation over time. We focus on assessing the reporting of information necessary to enable external validation by other investigators. Materials and Methods: We searched Embase, Medline, Web of Science, the Cochrane Library, and Google Scholar to identify studies that developed 1 or more multivariable prognostic prediction models using electronic health record (EHR) data published in the period 2009–2019. Results: We identified 422 studies that developed a total of 579 clinical prediction models using EHR data. We observed a steep increase over the years in the number of developed models. The percentage of models externally validated in the same paper remained at around 10%. Throughout 2009–2019, for both the target population and the outcome definitions, code lists were provided for less than 20% of the models. For about half of the models that were developed using regression analysis, the final model was not completely presented. Discussion: Overall, we observed limited improvement over time in the conduct and reporting of clinical prediction model development and validation. In particular, the prediction problem definition was often not clearly reported, and the final model was often not completely presented. Conclusion: Improvement in the reporting of information necessary to enable external validation by other investigators is still urgently needed to increase clinical adoption of developed models.
Aviv Y Landau, Susi Ferrarello, Ashley Blanchard, Kenrick Cato, Nia Atkins, Stephanie Salazar, Desmond U Patton, Maxim Topaz
Journal of the American Medical Informatics Association; https://doi.org/10.1093/jamia/ocab286

Abstract:
Child abuse and neglect are public health issues impacting communities throughout the United States. The broad adoption of electronic health records (EHRs) in health care supports the development of machine learning–based models to help identify child abuse and neglect. Employing EHR data for child abuse and neglect detection raises several critical ethical considerations. This article applied a phenomenological approach to discuss and provide recommendations for key ethical issues related to the development and evaluation of machine learning–based risk models: (1) biases in the data; (2) clinical documentation system design issues; (3) lack of a centralized evidence base for child abuse and neglect; (4) lack of a “gold standard” in the assessment and diagnosis of child abuse and neglect; (5) challenges in evaluating risk prediction performance; (6) challenges in testing predictive models in practice; and (7) challenges in presenting machine learning–based predictions to clinicians and patients. We provide recommended solutions to each of the 7 ethical challenges and identify several areas for further policy and research.
Aviv Y Landau, Ashley Blanchard, Kenrick Cato, Nia Atkins, Stephanie Salazar, Desmond U Patton, Maxim Topaz
Journal of the American Medical Informatics Association; https://doi.org/10.1093/jamia/ocab275

Abstract:
Objective: The study provides considerations for generating a phenotype of child abuse and neglect in emergency departments (EDs) using secondary data from electronic health records (EHRs). Implications are provided for racial bias reduction and the development of further decision support tools to assist in identifying child abuse and neglect. Materials and Methods: We conducted a qualitative study using in-depth interviews with 20 pediatric clinicians working in a single pediatric ED to gain insights about generating an EHR-based phenotype to identify children at risk for abuse and neglect. Results: Three central themes emerged from the interviews: (1) challenges in diagnosing child abuse and neglect, (2) differences across health disciplines in documentation styles within the EHR, and (3) identification of potential racial bias through documentation. Discussion: Our findings highlight important considerations for generating a phenotype for child abuse and neglect using EHR data. First, information-related challenges include the lack of proper previous visit history due to limited information exchange and scattered documentation within EHRs. Second, documentation styles differ by health discipline, and clinicians tend to document abuse in different document types within EHRs. Finally, documentation can help identify potential racial bias in suspicion of child abuse and neglect by revealing potential discrepancies in the quality of care and in the language used to document abuse and neglect. Conclusions: Our findings highlight challenges in building an EHR-based risk phenotype for child abuse and neglect. Further research is needed to validate these findings and integrate them into the creation of an EHR-based risk phenotype.
A Jay Holmgren, N Lance Downing, Mitchell Tang, Christopher Sharp, Christopher Longhurst, Robert S Huckman
Journal of the American Medical Informatics Association; https://doi.org/10.1093/jamia/ocab288

Elmer V Bernstam, Jeremy L Warner, John C Krauss, Edward Ambinder, Wendy S Rubinstein, George Komatsoulis, Robert S Miller, James L Chen
Journal of the American Medical Informatics Association; https://doi.org/10.1093/jamia/ocab289

Abstract:
Objectives: Electronic health records (EHRs) contain a large quantity of machine-readable data. However, institutions choose different EHR vendors, and the same product may be implemented differently at different sites. Our goal was to quantify the interoperability of real-world EHR implementations with respect to clinically relevant structured data. Materials and Methods: We analyzed de-identified and aggregated data from 68 oncology sites that implemented 1 of 5 EHR vendor products. Using 6 medications and 6 laboratory tests for which well-accepted standards exist, we calculated inter- and intra-EHR vendor interoperability scores. Results: The mean intra-EHR vendor interoperability score was 0.68 as compared to a mean of 0.22 for inter-system interoperability, when weighted by number of systems of each type, and 0.57 and 0.20 when not weighting by number of systems of each type. Discussion: In contrast to data elements required for successful billing, clinically relevant data elements are rarely standardized, even though applicable standards exist. We chose a representative sample of laboratory tests and medications for oncology practices, but our set of data elements should be seen as an example, rather than a definitive list. Conclusions: We defined and demonstrated a quantitative measure of interoperability between site EHR systems and within/between implemented vendor systems. Two sites that share the same vendor are, on average, more interoperable. However, even for implementation of the same EHR product, interoperability is not guaranteed. Our results can inform institutional EHR selection, analysis, and optimization for interoperability.
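The abstract reports weighted and unweighted means of intra- and inter-vendor interoperability scores without detailing the computation here. The sketch below shows one plausible way to derive such means from pairwise site comparisons; the pairwise score (the fraction of a fixed set of reference data elements, such as the 6 medications and 6 laboratory tests, coded with the same standard at both sites) and the weighting scheme are illustrative assumptions, not the paper's exact method.

```python
from collections import defaultdict
from itertools import combinations

def pairwise_score(codes_a, codes_b, elements):
    """Assumed pairwise score: fraction of reference data elements for
    which both sites record the same standard code."""
    agree = sum(1 for e in elements
                if codes_a.get(e) is not None and codes_a.get(e) == codes_b.get(e))
    return agree / len(elements)

def vendor_interoperability(sites, elements):
    """sites maps site id -> (vendor, {element: standard code}).
    Returns the unweighted and system-count-weighted mean intra-vendor
    scores and the mean inter-vendor score (illustrative only)."""
    intra, inter = defaultdict(list), []
    for (_, (va, ca)), (_, (vb, cb)) in combinations(sites.items(), 2):
        score = pairwise_score(ca, cb, elements)
        (intra[va] if va == vb else inter).append(score)
    per_vendor_mean = {v: sum(s) / len(s) for v, s in intra.items()}
    n_systems = {v: sum(1 for vendor, _ in sites.values() if vendor == v)
                 for v in per_vendor_mean}
    unweighted_intra = sum(per_vendor_mean.values()) / len(per_vendor_mean)
    weighted_intra = (sum(per_vendor_mean[v] * n_systems[v] for v in per_vendor_mean)
                      / sum(n_systems.values()))
    mean_inter = sum(inter) / len(inter)
    return unweighted_intra, weighted_intra, mean_inter
```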
Siru Liu, Charlene Weir, Daniel C Malone, Keaton Morgan, David ElHalta, et al.
Journal of the American Medical Informatics Association; https://doi.org/10.1093/jamia/ocab292

Abstract:
Objective: To evaluate the potential for machine learning to predict medication alerts that might be ignored by a user and to intelligently filter those alerts out of the user's view. Materials and Methods: We identified features (eg, patient and provider characteristics) proposed in the literature to modulate user responses to medication alerts; these features were then refined through expert review. Models were developed using rule-based and machine learning techniques (logistic regression, random forest, support vector machine, neural network, and LightGBM). We collected log data on alerts shown to users throughout 2019 at University of Utah Health. We sought to maximize precision while maintaining a false-negative rate <0.01, a threshold predefined through discussion with physicians and pharmacists; accordingly, models were developed with sensitivity fixed at 0.99. Two null hypotheses were tested: H1, there is no difference in precision among prediction models; and H2, the removal of any feature category does not change precision. Results: A total of 3,481,634 medication alerts with 751 features were evaluated. With sensitivity fixed at 0.99, LightGBM achieved the highest precision, 0.192, while keeping the false-negative rate below the 0.01 maximum predefined by subject-matter experts (H1) (P < 0.001). This model could reduce alert volume by 54.1%. We removed different combinations of features (H2) and found that not all features contributed significantly to precision. Removing medication order features (eg, dosage) decreased precision most significantly (−0.147, P = 0.001). Conclusions: Machine learning can potentially enable the intelligent filtering of medication alerts.
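As a rough illustration of the sensitivity-constrained optimization described above, the sketch below fits a LightGBM classifier and then chooses the decision threshold with the highest precision among thresholds whose recall (sensitivity) stays at or above 0.99, i.e., a false-negative rate below 0.01. It assumes a label of 1 for alerts the user acted on and uses placeholder inputs; the study's 751 features, rule-based comparisons, and evaluation pipeline are not reproduced here.

```python
import lightgbm as lgb
import numpy as np
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

def fit_and_pick_threshold(X, y, min_sensitivity=0.99):
    """Fit a LightGBM classifier and return it with the probability
    threshold that maximizes precision subject to recall >= min_sensitivity
    on a held-out split (illustrative sketch, not the study's pipeline)."""
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_val)[:, 1]
    precision, recall, thresholds = precision_recall_curve(y_val, scores)
    # precision/recall have one more entry than thresholds; drop the last point.
    feasible = recall[:-1] >= min_sensitivity
    best = int(np.argmax(np.where(feasible, precision[:-1], -1.0)))
    return model, thresholds[best]

# Alerts scoring below the chosen threshold would be candidates for filtering.
```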