Machine-Learning Based Model to Improve Insulin Bolus Calculation in Type 1 Diabetes Therapy

Abstract
Objective: The aim of this work is to propose a new machine-learning-based model to improve the calculation of mealtime insulin boluses (MIB) in type 1 diabetes (T1D) therapy using continuous glucose monitoring (CGM) data. Indeed, MIB is still often calculated with the standard formula (SF), which does not account for the glucose rate-of-change (ROC) and can therefore cause critical hypo-/hyperglycemic episodes.

Methods: Four candidate models for MIB calculation, based on multiple linear regression (MLR) and the least absolute shrinkage and selection operator (LASSO), are developed. The proposed models are assessed in silico, using the UVa/Padova T1D simulator, in different mealtime scenarios and compared to the SF and to three ROC-accounting variants proposed in the literature. An assessment on real data is also performed by retrospectively analyzing 218 glycemic traces.

Results: All four tested models performed better than the existing techniques. LASSO regression with an extended feature set including quadratic terms (LASSO_Q) produced the best results. In silico, LASSO_Q reduced the error in estimating the optimal bolus to only 0.86 U (vs. 1.45 U with the SF and 1.36-1.44 U with the literature methods), as well as the incidence of hypoglycemia (from 44.41% with the SF and 44.60-45.01% with the literature methods to 35.93%). These results are confirmed by the retrospective application to real data.

Conclusion: New models that improve MIB calculation by accounting for CGM-derived ROC and other easy-to-measure features can be developed within a machine-learning framework. In particular, in this paper a new LASSO_Q model was developed that ensures better glycemic control than the SF and the other literature methods.

Significance: MIB dosing with the proposed LASSO_Q model can potentially reduce the risk of adverse events in T1D therapy.
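To make the quantities involved concrete, the Python sketch below illustrates the standard formula and a LASSO regression fit on an extended feature set with quadratic terms, in the spirit of LASSO_Q. The feature layout, data, and hyperparameters are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler


def standard_formula(cho_g, cr, g_current, g_target, cf, iob):
    """Standard formula (SF) for the mealtime insulin bolus [U]:
    carbohydrate term + glucose correction term - insulin on board.
    cho_g: meal carbohydrates [g]; cr: carb ratio [g/U];
    cf: correction factor [mg/dL per U]; glucose values in mg/dL; iob in U.
    Note that no ROC term appears, which is the limitation the paper targets."""
    return cho_g / cr + (g_current - g_target) / cf - iob


# Hypothetical data standing in for the paper's training set: one row per
# meal, with easy-to-measure features (e.g., CHO, glucose, CGM ROC, IOB,
# body weight); the target is the optimal bolus identified in simulation.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))   # placeholder features, 5 per meal
y = rng.normal(size=500)        # placeholder optimal boluses [U]

# LASSO on quadratic (and interaction) terms of the base features; the
# alpha value is an illustrative choice and would be tuned, e.g., by
# cross-validation in practice.
lasso_q = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    StandardScaler(),
    Lasso(alpha=0.01),
)
lasso_q.fit(X, y)
predicted_bolus = lasso_q.predict(X[:1])
```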
Funding Information
  • Italian Ministry of Education, University and Research
  • Dipartimenti di Eccellenza initiative (Law 232/2016)