An overview of mortality risk prediction in sepsis

Abstract
Objective: To review the evolution and development of mortality risk prediction methods as they have been applied to the management of septic patients.

Data Sources: Selected relevant articles from the pertinent literature.

Study Selection: Theoretical and clinical data on mortality risk identification, severity of illness scoring systems, and cytokine levels as they relate to mortality in patients with sepsis.

Data Extraction: All concepts relating to mortality risk prediction, cytokines, severity of illness, and intensive care unit (ICU) mortality were explored and interrelated accordingly.

Data Synthesis: To improve the precision with which new sepsis therapies are evaluated, to monitor their utilization, and to refine their indications, it has been recommended that mortality risk stratification or severity of illness scoring systems be used both in clinical trials and in practice. With the increasing influence of managed care on healthcare delivery, demand will grow for techniques that stratify patients for cost-effective allocation of care. Severity of illness scoring systems are widely used for patient stratification in the management of cancer and heart disease; in patients with sepsis, however, their use has been limited largely to clinical trial design, where they ensure balance among treatment groups. Mortality risk prediction in sepsis has evolved from the identification of risk factors and simple counts of failing organs to sophisticated techniques that mathematically transform a raw score, comprising physiologic and/or clinical data, into a predicted risk of death. Most of the developed systems are based on global ICU populations rather than on sepsis-specific patient databases; a few newer systems are derived from such databases. The overall discriminating ability of the various methods, however, is similar.
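The transformation of a raw severity score into a predicted risk of death is typically accomplished with a logistic function. A minimal sketch follows; the intercept and slope here are purely illustrative placeholders, not the coefficients of any actual scoring system (real systems fit such coefficients by logistic regression on large ICU databases):

```python
import math

def predicted_death_risk(raw_score, intercept=-3.5, slope=0.15):
    """Convert a raw severity-of-illness score into a predicted
    probability of death via the logistic transformation.

    intercept and slope are illustrative values only; published
    systems derive these coefficients by fitting logistic
    regression models to large patient databases.
    """
    logit = intercept + slope * raw_score
    return 1.0 / (1.0 + math.exp(-logit))
```

The logistic form guarantees that the output lies between 0 and 1 and rises monotonically with the raw score, which is why it is the usual choice for mapping a physiologic score onto a mortality probability.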
Mortality prediction has also been attempted from plasma concentrations of endotoxin or cytokines (interleukin-1, interleukin-6, tumor necrosis factor). Although increased levels of these substances correlate with increased mortality, difficulties with bioassay and the sporadic appearance of these substances in the bloodstream prevent such measurements from being applied in practice. The calibration of risk prediction methods, that is, the agreement between predicted and actual mortality across the breadth of risk in a patient population, is excellent; however, accuracy in individual patient predictions is limited enough that clinical judgment must remain a major part of decision-making. As databases of appropriate patient information grow in size and complexity, it may become possible to devise a scoring system that can be relied on to assist in clinical decision-making.

Conclusions: Severity of illness scoring systems are widely used in critically ill patients, but their use in patients with sepsis has largely been limited to stratification in clinical trials. As newer sepsis therapies become available, it may be possible to use such systems to refine their indications and to monitor their utilization. Finally, as the databases supporting these systems grow in size and complexity, it may become possible to use them in clinical decision-making.
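The calibration assessment described above, comparing predicted with actual mortality across the breadth of risk, is conventionally checked by sorting patients into risk bands and comparing mean predicted risk with observed mortality in each band. A minimal sketch of that bookkeeping, with an arbitrary band count chosen for illustration:

```python
def calibration_table(predicted, died, n_bands=5):
    """Compare mean predicted risk with observed mortality in risk bands.

    predicted -- list of predicted death probabilities (0..1)
    died      -- list of 0/1 outcomes, in the same patient order
    Returns one (mean_predicted, observed_rate, n_patients) tuple
    per band, ordered from lowest to highest predicted risk.
    """
    pairs = sorted(zip(predicted, died))      # sort patients by risk
    size = max(1, len(pairs) // n_bands)      # patients per band
    table = []
    for i in range(0, len(pairs), size):
        band = pairs[i:i + size]
        preds = [p for p, _ in band]
        deaths = [d for _, d in band]
        table.append((sum(preds) / len(band),
                      sum(deaths) / len(band),
                      len(band)))
    return table
```

A well-calibrated system shows mean predicted risk close to the observed mortality rate in every band; individual-level accuracy, as noted above, can still be poor even when this population-level agreement is excellent.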