Journal of Biomedical Informatics

Journal Information
ISSN / EISSN : 1532-0464 / 1532-0480
Published by: Elsevier BV (DOI prefix 10.1016)
Total articles ≈ 3,037

Latest articles in this journal

Journal of Biomedical Informatics, Volume 119, pp 103821-103821; doi:10.1016/j.jbi.2021.103821

Rapidly developing AI and machine learning (ML) technologies can expedite therapeutic development, and their merits are particularly in focus during the current pandemic. The purpose of this study was to explore various ML approaches for molecular property prediction and illustrate their utility for identifying potential SARS-CoV-2 3CLpro inhibitors. We perform a series of drug discovery screenings based on supervised ML models operating in different ways on molecular representations, encompassing shallow learning methods based on fixed molecular fingerprints, a Graph Convolutional Neural Network (Graph-CNN) with its self-learned molecular representations, and ML methods that combine fixed and Graph-CNN-learned representations. Results of our ML models are compared both with respect to aggregated predictive performance, in terms of ROC-AUC on scaffold splits, and at the granular level of individual predictions corresponding to the top-ranked repurposing candidates. This comparison reveals a characteristic homogeneity in chemical and pharmacological classification, with a prevalence of sulfonamides and anticancer drugs, and also identifies novel groups of potential drug candidates against COVID-19. We show that the obtained results correspond well with already published research on COVID-19 treatment and provide novel insights on potential antiviral characteristics inferred from in vitro data.
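The ROC-AUC metric used to compare the screening models above can be computed directly from predicted scores and binary activity labels via the rank-sum (Mann-Whitney U) formulation. The study's own pipeline and scaffold-split code are not reproduced here; this is a minimal stdlib-only sketch with made-up labels and scores.

```python
def roc_auc(labels, scores):
    """ROC-AUC as the probability that a randomly chosen positive
    is scored above a randomly chosen negative (ties count as 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: active (1) vs. inactive (0) compounds with model scores.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.2, 0.6, 0.1]
print(round(roc_auc(labels, scores), 3))  # 8 of 9 positive/negative pairs ranked correctly -> 0.889
```

In practice one would use an optimized implementation (e.g. from a standard ML library); the quadratic pairwise loop here is only for clarity.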
Journal of Biomedical Informatics, Volume 119; doi:10.1016/j.jbi.2021.103816

Deep-learning-based medical image segmentation is an important step in diagnosis, and relies strongly on capturing sufficient spatial context without requiring overly complex models that are hard to train with limited labelled data. Training data is particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention models help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent criss-cross attention module approximates global self-attention while remaining memory- and time-efficient by separating horizontal and vertical self-similarity computations. However, capturing attention from all non-local locations can adversely impact the accuracy of semantic segmentation networks. We propose a new Dynamic Deformable Attention Network (DDANet) that enables more accurate contextual information computation in a similarly efficient way. Our novel technique is based on a deformable criss-cross attention block that learns both attention coefficients and attention offsets in a continuous way. A deep U-Net (Schlemper et al., 2019) segmentation network employing this attention mechanism captures attention from pertinent non-local locations and also improves performance on semantic segmentation tasks compared to criss-cross attention within a U-Net, on a challenging COVID-19 lesion segmentation task. Our validation experiments show that the performance gain of the recursively applied dynamic deformable attention blocks comes from their ability to capture dynamic and precise attention context. Our DDANet achieves Dice scores of 73.4% and 61.3% for ground-glass opacity and consolidation lesions in COVID-19 segmentation, improving accuracy by 4.9 percentage points over a baseline U-Net and 24.4 percentage points over current state-of-the-art methods (Fan et al., 2020).
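The Dice scores reported above (73.4% and 61.3%) are the standard overlap measure between a predicted binary mask and the ground-truth mask. A minimal sketch over flattened binary masks (the toy masks below are invented for illustration):

```python
def dice(pred, target):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary
    masks given as flat lists of 0/1 values of equal length."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / total if total else 1.0

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, target), 3))  # 2*2 / (3+3) = 0.667
```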
Journal of Biomedical Informatics, Volume 119; doi:10.1016/s1532-0464(21)00187-8

, Michael Lau, Guilherme Del Fiol
Journal of Biomedical Informatics, Volume 119; doi:10.1016/j.jbi.2021.103842

Step-up therapy is a patient management approach that aims to balance the efficacy, costs and risks posed by different lines of medications. While the initiation of first-line medications is a straightforward decision, stepping a patient up to the next treatment line is often more challenging and difficult to predict. By identifying patients who are likely to move to the next line of therapy, prediction models could help healthcare organizations with resource planning and chronic disease management. Our objective was to compare supervised versus semi-supervised learning for predicting which rheumatoid arthritis patients will move from the first line of therapy (i.e., conventional synthetic disease-modifying antirheumatic drugs) to the next line of therapy (i.e., biologic or targeted synthetic disease-modifying antirheumatic drugs) within one year. Five groups of features were extracted from an administrative claims database: demographics, medications, diagnoses, provider characteristics, and procedures. Then, a variety of supervised and semi-supervised learning methods were implemented to identify the optimal method for each approach and assess the contribution of each feature group. Finally, error analysis was conducted to understand the behavior of misclassified patients. XGBoost yielded the highest F-measure (42%) among the supervised approaches, and a one-class support vector machine achieved the highest F-measure (65%) among the semi-supervised approaches. The semi-supervised approach had significantly higher F-measure (65% vs. 42%; p < 0.01), precision (51% vs. 33%; p < 0.01), and recall (89% vs. 59%; p < 0.01) than the supervised approach. Excluding demographic, drug, diagnosis, provider, and procedure features reduced the F-measure from 65% to 61%, 57%, 54%, 51% and 49%, respectively (p < 0.01).
The error analysis showed that a substantial portion of false-positive patients changed their line of therapy shortly after the prediction period. This study showed that supervised learning approaches are not optimal for a difficult clinical decision such as step-up therapy. More specifically, negative class labels in step-up therapy data are not a robust ground truth, because the costs and risks associated with a higher line of therapy affect the objective decision making of patients and providers. The proposed semi-supervised learning approach can be applied to other step-up therapy applications.
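The F-measure used throughout this comparison is the harmonic mean of precision and recall. As a quick arithmetic check against the numbers reported above, the semi-supervised operating point (precision 51%, recall 89%) does yield the reported F-measure of about 65%:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported one-class SVM operating point: precision 51%, recall 89%.
print(round(f_measure(0.51, 0.89), 2))  # 0.65, matching the reported 65%
```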
Journal of Biomedical Informatics; doi:10.1016/j.jbi.2021.103862

It has not been long since a new disease called COVID-19 hit the international community. The unknown nature of the virus, evidence of its adaptability and survival in new conditions, its widespread prevalence and lengthy recovery period, along with daily notifications of new infection and fatality statistics, have created a wave of fear and anxiety among the public and the authorities. These factors have led to extreme changes in the social discourse in a rather short period of time. Analyzing this discourse is important for reconciling society and restoring ordinary conditions of mental peace and health. Although much research has been done on the disease since it became an international pandemic, the sociological analysis of this recent public phenomenon, especially in developing countries, still needs attention. We propose a framework for analyzing social media data and news stories centered on the COVID-19 disease. Our research is based on an extensive Persian dataset gathered from different social media networks and news agencies in the period of January 21 to April 29, 2020. We use the Latent Dirichlet Allocation (LDA) model and dynamic topic modeling to understand and capture the change of discourse in terms of temporal subjects. We scrutinize the reasons for these subject shifts by exploring the related events and the adopted practices and policies. The social discourse can strongly affect community morale and polarization. Therefore, we further analyze the polarization of online social media posts and detect points of concept drift in the stream. Based on the analyzed content, effective guidelines are extracted to shift polarization in a positive direction. The results show that the proposed framework provides an effective practical approach for cause-and-effect analysis of the social discourse.
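The abstract mentions detecting points of concept drift in a stream of post polarities. The paper's actual drift-detection method is not described here, so the following is only a generic sliding-window sketch: flag a drift point whenever the mean polarity of the current window deviates from the preceding window by more than a threshold. The stream values and parameters are invented.

```python
def drift_points(stream, window=5, threshold=0.5):
    """Return indices where mean polarity of the current window
    differs from the previous window by more than `threshold`."""
    points = []
    for i in range(2 * window, len(stream) + 1):
        prev = stream[i - 2 * window : i - window]
        curr = stream[i - window : i]
        if abs(sum(curr) / window - sum(prev) / window) > threshold:
            points.append(i - window)  # drift located at the window boundary
    return points

# Toy polarity stream: mostly positive, then an abrupt negative shift.
stream = [0.8, 0.7, 0.9, 0.8, 0.7, -0.6, -0.7, -0.8, -0.6, -0.7]
print(drift_points(stream))  # [5]: the shift begins at index 5
```

Dedicated streaming detectors (e.g. Page-Hinkley or ADWIN-style methods) would be the more principled choice in a real pipeline.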
Journal of Biomedical Informatics, Volume 119; doi:10.1016/j.jbi.2021.103820

The identification of causal relationships between events or entities in biomedical texts is of great importance for creating scientific knowledge bases and is also a fundamental natural language processing (NLP) task. A causal (cause-effect) relation is defined as an association between two events in which the first must occur before the second. Although this task is an open problem in artificial intelligence, and despite its important role in information extraction from the biomedical literature, very few works have considered it. However, with the advent of new techniques in machine learning, especially deep neural networks, research increasingly addresses this problem. This paper summarizes state-of-the-art research, its applications, existing datasets, and remaining challenges. For this survey we have implemented and evaluated various techniques, including a Multiview CNN (MVC), attention-based BiLSTM models, and state-of-the-art word embedding models such as deep contextualized representations (ELMo) and transformer architectures (BioBERT). In addition, we have evaluated a graph LSTM as well as a baseline rule-based system. We have investigated the class imbalance problem as an innate property of annotated data in this type of task. The results show that a considerable improvement over state-of-the-art systems can be achieved when a simple random oversampling technique for data augmentation is used to reduce class imbalance.
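Random oversampling, the data-augmentation technique credited above with the improvement, simply duplicates minority-class examples at random until class counts balance. A minimal stdlib-only sketch (the survey's actual training data and sampling code are not shown here):

```python
import random

def oversample(examples, labels, seed=0):
    """Randomly duplicate minority-class examples (sampling with
    replacement) until every class matches the majority-class count."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_label.values())
    out_x, out_y = [], []
    for y, xs in by_label.items():
        picked = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picked)
        out_y.extend([y] * target)
    return out_x, out_y

# 4 negative sentences vs. 1 causal sentence: heavily imbalanced.
x, y = oversample(["s1", "s2", "s3", "s4", "s5"], [0, 0, 0, 0, 1])
print(y.count(0), y.count(1))  # both classes now have 4 examples
```

Oversampling is applied only to the training split; duplicating examples before splitting would leak them into the test set.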
, Chuan Qiu, Hui Shen, Yu-Ping Wang, Hong-Wen Deng
Journal of Biomedical Informatics, Volume 120; doi:10.1016/j.jbi.2021.103854

In recent years, the comprehensive study of complex diseases with multi-view datasets (e.g., multi-omics and imaging scans) has been a focus and forefront of biomedical research. State-of-the-art biomedical technologies are enabling us to collect multi-view biomedical datasets for the study of complex diseases. While all views of the data tend to explore complementary information about a disease, the analysis of multi-view data with complex interactions remains challenging for a deeper and holistic understanding of biological systems. In this paper, we propose a novel generalized kernel machine approach to identify higher-order composite effects in multi-view biomedical datasets (GKMAHCE). This generalized semi-parametric (mixed-effect linear model) approach includes the marginal and joint Hadamard products of features from different views of the data. The proposed kernel machine approach considers multi-view data as predictor variables to allow a more thorough and comprehensive modeling of a complex trait. We applied the GKMAHCE approach to both synthesized datasets and real multi-view datasets from an adolescent brain development study and an osteoporosis study. Our experiments demonstrate that the proposed method can effectively identify higher-order composite effects and suggest that the corresponding features (genes, regions of interest, and chemical taxonomies) function in a concerted effort. We show that the proposed method is more generalizable than existing ones. To promote reproducible research, the source code of the proposed method is available at.
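The Hadamard product mentioned above is the element-wise product of aligned feature vectors from two views, used as an interaction term alongside the marginal features. GKMAHCE itself fits a semi-parametric kernel mixed model, which is not reproduced here; this sketch only illustrates the marginal-plus-Hadamard feature construction, with invented feature values.

```python
def hadamard_features(view_a, view_b):
    """Concatenate marginal features from two views with their
    element-wise (Hadamard) product as an interaction term.
    The two views must be aligned feature-for-feature."""
    assert len(view_a) == len(view_b), "views must have equal length"
    interaction = [a * b for a, b in zip(view_a, view_b)]
    return view_a + view_b + interaction

omics   = [0.5, 1.0, 2.0]   # hypothetical multi-omics features
imaging = [2.0, 0.0, 1.5]   # hypothetical imaging (ROI) features
print(hadamard_features(omics, imaging))
# [0.5, 1.0, 2.0, 2.0, 0.0, 1.5, 1.0, 0.0, 3.0]
```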
Journal of Biomedical Informatics; doi:10.1016/j.jbi.2021.103875

Nowadays, with the digitalization of healthcare systems, huge amounts of clinical narratives are available. However, despite the wealth of information contained in them, interoperability and the extraction of relevant information from documents remain a challenge. This work presents an approach towards automatically standardizing Spanish Electronic Discharge Summaries (EDS) following the HL7 Clinical Document Architecture. We address the task of section annotation in EDSs written in Spanish, experimenting with three different approaches, with the aim of boosting interoperability across healthcare systems and hospitals. The paper presents three different methods, ranging from a knowledge-based solution by means of manually constructed rules to supervised machine learning approaches using state-of-the-art algorithms such as the Perceptron and transfer-learning-based neural networks. The paper presents a detailed evaluation of the three approaches on data from two different hospitals. Overall, the best system obtains a 93.03% F-score for section identification. It is worth mentioning that this result is not completely homogeneous over all section types and hospitals, showing that cross-hospital variability is greater in certain sections than in others. As a main result, this work proves the feasibility of accurate automatic detection and standardization of section blocks in clinical narratives, opening the way to interoperability and secondary use of clinical data.
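The knowledge-based approach described above relies on manually constructed rules that recognize section headers. The paper's actual rule set is hand-crafted and far more extensive; the header patterns and section labels below are invented for illustration only.

```python
import re

# Illustrative Spanish section-header patterns (hypothetical subset).
SECTION_PATTERNS = {
    "antecedentes": re.compile(r"^\s*antecedentes", re.IGNORECASE),
    "diagnostico":  re.compile(r"^\s*diagn[oó]stico", re.IGNORECASE),
    "tratamiento":  re.compile(r"^\s*tratamiento", re.IGNORECASE),
}

def tag_sections(lines):
    """Label every line of a discharge summary with the most
    recently matched section header ('unknown' before any match)."""
    current, tagged = "unknown", []
    for line in lines:
        for label, pattern in SECTION_PATTERNS.items():
            if pattern.match(line):
                current = label
                break
        tagged.append((current, line))
    return tagged

doc = ["ANTECEDENTES PERSONALES:", "Hipertensión.",
       "TRATAMIENTO:", "Enalapril 10 mg."]
for label, line in tag_sections(doc):
    print(label, "|", line)
```

A real system would also normalize the detected blocks to the corresponding HL7 CDA section codes; that mapping is omitted here.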
, , , , Anand Avati, Andrew Ng, Sanjay Basu, Nigam H. Shah
Journal of Biomedical Informatics, Volume 119; doi:10.1016/j.jbi.2021.103826

Machine learning (ML) models for allocating readmission-mitigating interventions are typically selected according to their discriminative ability, which may not necessarily translate into utility in the allocation of resources. Our objective was to determine whether ML models for allocating readmission-mitigating interventions rank differently by overall utility than by discriminative ability. We conducted a retrospective utility analysis of ML models using claims data acquired from the Optum Clinformatics Data Mart, including 513,495 commercially insured inpatients (mean [SD] age 69 [19] years; 294,895 [57%] female) from all 50 states over the period January 2016 through January 2017, with a mean 90-day cost of $11,552. The utility analysis estimates the value, in dollars, of allocating interventions for lowering readmission risk based on the reduction in 90-day cost. Allocating readmission-mitigating interventions based on a gradient-boosted decision tree (GBDT) model trained to predict readmissions achieved an estimated utility gain of $104 per patient and an AUC of 0.76 (95% CI 0.76, 0.77); allocating interventions based on a model trained to predict cost as a proxy achieved a higher utility of $175.94 per patient and an AUC of 0.62 (95% CI 0.61, 0.62). A hybrid model combining both intervention strategies is comparable with the best models on either metric. Estimated utility varies by intervention cost and efficacy, with each model performing best under different intervention settings. We demonstrate that machine learning models may be ranked differently based on overall utility versus discriminative ability. Machine learning models for the allocation of limited health resources should consider directly optimizing for utility.
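The paper's utility estimator is not reproduced here, but the general idea of a per-patient utility analysis can be sketched as an expected-value calculation: intervene on patients above a risk threshold, count the avoided 90-day cost (risk times intervention efficacy times cost) against the intervention's price. All numbers below are hypothetical.

```python
def expected_utility(risks, costs_90d, intervention_cost, efficacy, threshold):
    """Net expected saving per patient from intervening on everyone
    whose predicted readmission risk exceeds `threshold`. Each treated
    patient contributes risk * efficacy * cost_90d in avoided cost,
    minus the fixed intervention cost."""
    net = 0.0
    for risk, cost in zip(risks, costs_90d):
        if risk > threshold:
            net += risk * efficacy * cost - intervention_cost
    return net / len(risks)

risks = [0.9, 0.2, 0.6, 0.1]            # hypothetical predicted risks
costs = [12000, 8000, 11000, 9000]      # hypothetical 90-day costs ($)
# Hypothetical intervention: costs $500, prevents 20% of readmissions.
print(round(expected_utility(risks, costs, 500, 0.2, 0.5), 2))  # 620.0
```

Because the result depends on intervention cost and efficacy, two models with different AUCs can trade places on this metric, which is exactly the point the abstract makes.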
, Arjuna Scagnetto, Simona Romani, Giulia Barbati
Journal of Biomedical Informatics; doi:10.1016/j.jbi.2021.103876

Interpretability is fundamental in healthcare problems, and its lack in deep learning models is currently the major barrier to the use of such powerful algorithms in the field. This study describes the implementation of an attention layer for a Long Short-Term Memory (LSTM) neural network that provides a useful picture of the influence of the several input variables included in the model. A cohort of 10,616 patients with cardiovascular diseases is selected from the MIMIC-III dataset, an openly available database of electronic health records (EHRs) covering patients admitted to ICUs at the Beth Israel Deaconess Medical Center in Boston. For each patient, we consider a 10-length sequence of 1-hour windows in which 48 clinical parameters are extracted to predict the occurrence of death in the next 7 days. Inspired by recent developments in attention mechanisms for sequential data, we implement a recurrent neural network with LSTM cells incorporating an attention mechanism to identify the features driving the model's decisions over time. The performance of the LSTM model, measured in terms of AUC, is 0.790 (SD = 0.015). Regarding our primary objective, i.e., model interpretability, we investigate the role of the attention weights. We find good correspondence with the driving predictors of a transparent model (r = 0.611, 95% CI [0.395, 0.763]). Moreover, the most influential features identified at the cohort level emerge as known risk factors in the clinical context. Despite the limitations of the study dataset, this work brings further evidence of the potential of attention mechanisms to make deep learning models more interpretable, and suggests the application of this strategy to the sequential analysis of EHRs.
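The core of an attention layer over LSTM outputs is a softmax over per-timestep scores, yielding weights that both pool the hidden states into a context vector and serve as the interpretability signal discussed above. The study's network is not reproduced here; this stdlib-only sketch uses toy hidden states and scores.

```python
import math

def attend(hidden_states, scores):
    """Softmax the per-timestep scores into attention weights and
    return (weights, weighted sum of the hidden-state vectors)."""
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context

# Three timesteps with 2-dim hidden states (toy values); the third
# timestep gets the highest score and thus dominates the context.
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attend(h, [0.1, 0.1, 2.0])
print([round(w, 3) for w in weights])
```

In the model described above, the scores themselves are learned (e.g. from a small feed-forward layer over each hidden state), and inspecting the resulting weights per timestep is what links predictions back to input windows.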