Value in Health

Journal Information
ISSN / EISSN : 1098-3015 / 1524-4733
Published by: Elsevier BV (DOI prefix: 10.1016)
Total articles ≈ 57,845
Current Coverage
SCOPUS
SCIE
SSCI
Index Medicus
MEDLINE
PUBMED
Archived in
EBSCO
SHERPA/ROMEO

Latest articles in this journal

Published: 1 July 2021
Value in Health, Volume 24; doi:10.1016/s1098-3015(21)01570-9

Saskia de Groot, Matthijs Versteegh, Tim Kanters, Louis Wagner, Jacqueline Ardesch, Werner Brouwer, Job van Exel
Published: 1 July 2021
Value in Health; doi:10.1016/j.jval.2021.05.001

Abstract:
Objectives: Cost-effectiveness analyses typically require measurement of health-related quality of life (HRQoL) to estimate quality-adjusted life-years. Challenges with measuring HRQoL arise in the context of episodic conditions if patients are less likely (or even unable) to complete surveys when experiencing disease symptoms. This article explored whether HRQoL measured at regular time intervals adequately reflects the HRQoL of people with epilepsy (PWE).
Methods: Follow-up data from the Epilepsy Support Dog Evaluation study on the (cost-)effectiveness of seizure dogs were used, in which HRQoL was measured in 25 PWE with the EQ-5D at baseline and every 3 months thereafter. Seizure counts were recorded daily using a seizure diary. Regression models were employed to explore whether PWE were more likely to complete the HRQoL survey on a good day (ie, when seizures were absent or low in frequency compared with other days) and to estimate the impact of reporting HRQoL on a good day on EQ-5D utility scores.
Results: A total of 111 HRQoL measurements were included in the analyses. Regression analyses indicated that the day of reporting HRQoL was associated with a lower seizure count (P<.05) and that a lower seizure count was associated with a higher EQ-5D utility score (P<.05).
Conclusions: When HRQoL is measured at regular time intervals, PWE seem more likely to complete these surveys on good days. Consequently, HRQoL might be overestimated in this population. This could lead to underestimation of the effectiveness of treatment and to biased estimates of cost-effectiveness.
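The selection effect described in this abstract (surveys completed preferentially on good days) can be illustrated with a small simulation. All numbers below are hypothetical assumptions for illustration, not the study's data: a year of daily seizure counts and the EQ-5D utility a patient would report on each day.

```python
# Illustrative sketch only (hypothetical numbers, not study data): if surveys
# are completed preferentially on low-seizure days, the mean EQ-5D utility --
# and hence estimated QALYs -- is inflated relative to sampling all days.

import random

random.seed(0)

# Hypothetical daily data for one patient over a year: seizure count and the
# EQ-5D utility the patient would report that day (lower on high-seizure days).
days = []
for _ in range(365):
    seizures = random.choice([0, 0, 0, 1, 2, 5])
    utility = 0.85 - 0.05 * seizures
    days.append((seizures, utility))

# Unbiased estimate: average utility across all days.
true_mean = sum(u for _, u in days) / len(days)

# Biased estimate: surveys only get completed on "good" days (few seizures).
good = [u for s, u in days if s <= 1]
survey_mean = sum(good) / len(good)

print(round(true_mean, 3), round(survey_mean, 3))  # survey_mean > true_mean
```

Because good days over-represent low seizure counts, `survey_mean` exceeds `true_mean`, which is the direction of bias the authors report.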
Gary P. Jeffrey, Grant A. Ramm, Louisa G. Gordon
Published: 1 July 2021
Value in Health; doi:10.1016/j.jval.2021.04.1286

Abstract:
Objectives: Risk-stratified ultrasound screening for hepatocellular carcinoma (HCC), informed by a serum biomarker test, enables resources to be targeted to patients at the highest risk of developing cancer. We aimed to investigate the cost-effectiveness of risk-stratified screening for HCC in the Australian healthcare system.
Methods: A Markov cohort model was constructed to test 3 scenarios for patients with compensated cirrhosis: (1) risk-stratified screening for high-risk patients, (2) all-inclusive screening, and (3) no formal screening. Probabilistic sensitivity analyses were undertaken to determine the impact of uncertainty. Scenario analyses were used to assess cost-effectiveness in Australia's Aboriginal and Torres Strait Islander peoples and to determine the impact of including productivity-related costs of mortality.
Results: Both risk-stratified screening and all-inclusive screening programs were cost-effective compared with no formal screening, with incremental cost-effectiveness ratios of A$39 045 and A$23 090 per quality-adjusted life-year (QALY), respectively. All-inclusive screening had an incremental cost-effectiveness ratio of A$4453 compared with risk-stratified screening and had the highest probability of being cost-effective at a willingness-to-pay (WTP) threshold of A$50 000 per QALY. Risk-stratified screening had the highest likelihood of cost-effectiveness when the WTP was between A$25 000 and A$35 000 per QALY. Cost-effectiveness results were further strengthened when applied to an Aboriginal and Torres Strait Islander cohort and when productivity costs were included.
Conclusions: Population-wide screening of patients with cirrhosis for HCC is likely to be cost-effective in Australia. Risk-stratified screening using a serum biomarker test may be cost-effective at lower WTP thresholds.
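The ICER comparisons in this abstract follow standard arithmetic: incremental cost divided by incremental QALYs, judged against a willingness-to-pay threshold. The sketch below reproduces that decision rule with hypothetical placeholder costs and QALYs (the study's actual Markov model inputs are not given in the abstract).

```python
# Minimal sketch of ICER logic with invented per-patient totals; the strategy
# names mirror the abstract but every number here is an assumption.

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A vs comparator B."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Hypothetical (cost in A$, QALYs) per patient for three screening strategies.
no_screen = (20_000.0, 5.00)
risk_strat = (24_000.0, 5.10)
all_incl = (26_500.0, 5.16)

icer_risk = icer(*risk_strat, *no_screen)  # risk-stratified vs no screening
icer_all = icer(*all_incl, *risk_strat)    # all-inclusive vs risk-stratified

wtp = 50_000  # willingness-to-pay threshold, A$/QALY
print(round(icer_risk), round(icer_all), icer_all < wtp)
```

A strategy whose ICER falls below the threshold is deemed cost-effective, which is why the preferred strategy in the study flips as the WTP moves between roughly A$25 000 and A$50 000 per QALY.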
Published: 1 July 2021
Value in Health, Volume 24; doi:10.1016/s1098-3015(21)01574-6

Ahmed H. Seddik, Dennis A. Ostwald, Sara Schramm, Jasper Huels, Zaza Katsarava
Published: 1 July 2021
Value in Health; doi:10.1016/j.jval.2021.04.1281

Abstract:
Objectives: Migraine is a highly prevalent neurological disorder. The most characteristic symptom of migraine is moderate to severe recurrent headache along with other neurological symptoms. In this study, we modeled the potential reduction in migraine days and corresponding avoided productivity losses if erenumab were prescribed to the patient population indicated for prophylactic migraine treatment (≥4 monthly migraine days [MMDs]) in Germany from 2020 to the end of 2027.
Methods: We simulated the incremental benefits of erenumab against the standard of care. Response rates, transition probabilities, discontinuation rates, and productivity estimates were derived from the erenumab clinical trial program. Patients had a probability of residing in 1 of 7 states, given their MMDs, in addition to a probability of death. Based on accrued MMDs in every cycle, days of absenteeism and presenteeism for paid and unpaid work were derived. Paid work was monetized according to gross value added using the human capital approach, whereas unpaid work was valued according to the proxy good method. In addition, downstream macroeconomic effects were captured using value-added multipliers. Direct medical costs were calculated concomitantly.
Results: Our results show that prescribing erenumab for the indicated population in Germany could lead to a reduction of 166 million migraine days annually and reduce productivity losses by approximately €27 billion. This includes €13.1 billion from direct productivity and €13.5 billion from economic value chain effects.
Conclusions: This study highlights the macroeconomic effects of a systematic introduction of novel inhibitors of the calcitonin gene-related peptide pathway for migraine in Germany.
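The human capital approach mentioned here values lost paid-work time at gross value added per worker-day, with presenteeism days typically counted at a reduced weight. A minimal sketch, using invented figures rather than the study's German inputs:

```python
# Hedged sketch of human-capital-approach monetization; the gross value added
# per day, day counts, and presenteeism weight below are all assumptions.

def productivity_loss(absent_days, presenteeism_days, gva_per_day,
                      presenteeism_weight=0.5):
    """Value lost paid-work time; presenteeism days count at a reduced weight."""
    return (absent_days + presenteeism_weight * presenteeism_days) * gva_per_day

# Per-patient annual example with placeholder numbers.
loss = productivity_loss(absent_days=4, presenteeism_days=10, gva_per_day=300.0)
print(loss)  # 4*300 + 0.5*10*300 = 2700.0
```

Scaling such per-patient losses to the indicated population, and applying value-added multipliers for downstream effects, yields population-level figures of the kind reported in the Results.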
Shakti Shrestha, Paul Scuffham
Published: 1 July 2021
Value in Health; doi:10.1016/j.jval.2021.04.1276

Abstract:
Objectives: Although there is a growing body of evidence suggesting that cannabinoids may relieve symptoms of some illnesses, they are relatively high-cost therapies compared with illicit growth and supply. This article aimed to comprehensively review economic evaluations of medicinal cannabis for alleviating refractory symptoms associated with chronic conditions.
Methods: Seven electronic databases were searched for articles published up to September 6, 2020. The quality of reporting of economic evaluations was assessed using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. The extracted data were grouped into subcategories according to types of medical conditions, organized into tables, and reported narratively.
Results: This review identified 12 cost-utility analyses conducted across a variety of diseases, including multiple sclerosis (MS) (N = 8), pediatric drug-resistant epilepsies (N = 2), and chronic pain (N = 2). The incremental cost-effectiveness ratio varied widely, from cost saving to more than US$451 800 per quality-adjusted life-year, depending on the setting, perspectives, types of medicinal cannabis, and indications. Nabiximols is a cost-effective intervention for MS spasticity in multiple European settings. Cannabidiol was found to be cost-effective for Dravet syndrome in a Canadian setting, whereas a cost-utility analysis conducted in a US setting deemed cannabidiol not cost-effective for Lennox-Gastaut syndrome. Overall study quality was good, with publications meeting 70% to 100% (median 83%) of the CHEERS checklist criteria.
Conclusions: Medicinal cannabis-based products may be cost-effective treatment options for MS spasticity, Dravet syndrome, and neuropathic pain, although the literature is nascent. Well-designed clinical trials and health economic evaluations are needed to generate adequate clinical and cost-effectiveness evidence to assist in resource allocation.
Arnaud Nze Ossima, Marie-Caroline Clément, Morgane Michel, Karine Chevreul
Published: 1 July 2021
Value in Health; doi:10.1016/j.jval.2021.05.004

Abstract:
Objectives: This study aimed to evaluate the uncertainty related to the use of common collection tools to assess costs in economic evaluations compared with an exhaustive administrative database.
Methods: A pragmatic study was performed using preexisting cost-effectiveness studies. Patients were probabilistically matched with themselves in the French National Health Data System (Système National des Données de Santé [SNDS]), and all their reimbursed hospital and ambulatory care data during the study were extracted. Outcomes included the ratio of the number of each type of resource consumed using trial data (case report forms for ambulatory care and local hospital data for hospital care) versus the SNDS, and the ratio of the corresponding costs. Mean ratios and 95% confidence intervals (CIs) were calculated using bootstrapping. The impact of the collection tool on the result of the economic evaluation was assessed as the difference in costs between the 2 treatment arms under both collection methods.
Results: Five cost-effectiveness studies were included in the analysis. A total of 397 patients had SNDS hospital data, and 321 had ambulatory care data. Common collection tools underestimated hospital admissions by 13% (95% CI 8-20), corresponding costs by 5% (95% CI 2-14), and ambulatory acts by 41% (95% CI 33-51), with large variations in costs depending on the study. There was no change in the economic conclusion in any study.
Conclusions: The use of common collection tools underestimates healthcare resource consumption and its associated costs, particularly for ambulatory care. Our results could provide useful evidence-based estimates to inform sensitivity analysis parameters in future cost-effectiveness analyses.
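The bootstrapped ratio described in the Methods can be sketched as follows. The per-patient counts are synthetic stand-ins (the abstract does not publish patient-level data): each pair holds a trial-collected count and the corresponding registry count, and the 95% CI comes from percentiles of resampled ratios.

```python
# Sketch of a percentile bootstrap CI for the ratio of trial-collected to
# registry-recorded resource counts. Data are synthetic assumptions.

import random

random.seed(1)

# Per-patient (trial_count, registry_count) pairs; trial collection is made
# to systematically miss some use, so trial < registry on average.
pairs = [(random.randint(3, 8), random.randint(6, 12)) for _ in range(100)]

def ratio(sample):
    return sum(t for t, _ in sample) / sum(r for _, r in sample)

# Resample patients with replacement and recompute the ratio 2000 times.
boots = sorted(
    ratio([random.choice(pairs) for _ in pairs]) for _ in range(2000)
)
low, high = boots[49], boots[1949]  # 2.5th and 97.5th percentiles

print(round(ratio(pairs), 3), round(low, 3), round(high, 3))
```

A ratio CI entirely below 1, as produced here by construction, is what an underestimate like the reported "41% (95% CI 33-51)" for ambulatory acts corresponds to.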
Matt D. Stevenson, Kostas Triantafyllopoulos, Andrea Manca
Published: 1 July 2021
Value in Health; doi:10.1016/j.jval.2021.05.009

Abstract:
Objectives: Curative treatments can result in complex hazard functions. The use of standard survival models may result in poor extrapolations. Several models for data that may have a cure fraction are available, but comparisons of their extrapolation performance are lacking. A simulation study was performed to assess the performance of models with and without a cure fraction when fit to data with a cure fraction.
Methods: Data were simulated from a Weibull cure model, with 9 scenarios corresponding to different lengths of follow-up and sample sizes. Cure and noncure versions of standard parametric, Royston-Parmar, and dynamic survival models were considered, along with noncure fractional polynomial and generalized additive models. The mean-squared error and bias in estimates of the hazard function were estimated.
Results: With the shortest follow-up, none of the cure models provided good extrapolations. Performance improved with increasing follow-up, except for the misspecified standard parametric cure model (lognormal). The performance of the flexible cure models was similar to that of the correctly specified cure model. Accurate estimates of the cured fraction were not necessary for accurate hazard estimates. Models without a cure fraction provided markedly worse extrapolations.
Conclusions: For curative treatments, failure to model the cured fraction can lead to very poor extrapolations. Cure models provide improved extrapolations, but with immature data there may be insufficient evidence to choose between cure and noncure models, emphasizing the importance of clinical knowledge for model choice. Dynamic cure fraction models were robust to model misspecification, but standard parametric cure models were not.
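The Weibull mixture cure model named as the data-generating mechanism has a simple closed form: a fraction pi of patients is cured and never experiences the event, while the rest follow a Weibull. A sketch with illustrative parameter values (not the study's simulation settings):

```python
# Weibull mixture cure model: S(t) = pi + (1 - pi) * S_weibull(t).
# The population hazard falls toward 0 as survivors are increasingly cured,
# which is why noncure models extrapolate this shape poorly.

import math

def cure_survival(t, pi, shape, scale):
    """Mixture survival: cured fraction pi plus Weibull survival for the rest."""
    return pi + (1 - pi) * math.exp(-((t / scale) ** shape))

def cure_hazard(t, pi, shape, scale, eps=1e-6):
    """Numerical hazard h(t) = -d/dt log S(t)."""
    s0 = cure_survival(t, pi, shape, scale)
    s1 = cure_survival(t + eps, pi, shape, scale)
    return (math.log(s0) - math.log(s1)) / eps

pi, shape, scale = 0.3, 1.2, 2.0  # illustrative parameter values
print(cure_survival(0.0, pi, shape, scale))  # ~1.0 at t=0
print(cure_hazard(1.0, pi, shape, scale) > cure_hazard(20.0, pi, shape, scale))
```

Because S(t) plateaus at pi instead of decaying to 0, the long-run hazard vanishes; a standard (noncure) parametric model forced through the same early data typically keeps its hazard strictly positive, producing the poor extrapolations the study reports.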
Charles Kennergren, Khaldoun G. Tarakji, David J. Wright, Fozia Z. Ahmed, Janet M. McComb, Andreas Goette, Thomas Blum, Mauro Biffi, Michelle Green, et al.
Published: 1 July 2021
Value in Health, Volume 24, pp 930-938; doi:10.1016/j.jval.2020.12.021

Abstract:
Objectives: To model the cost-effectiveness of the TYRX Absorbable Antibacterial Envelope when used in patients at increased risk of cardiac implantable electronic device (CIED) infection in the context of 3 European healthcare systems: Germany, Italy, and England.
Methods: A decision tree model with a lifetime horizon was populated using data from the Worldwide Randomized Antibiotic Envelope Infection Prevention Trial, a large multicenter randomized controlled trial. Use of the antibacterial envelope adjunctive to standard of care was compared with standard of care infection prevention alone. Patients in the model were divided into subgroups based on the presence of factors known to increase infection risk.
Results: The antibacterial envelope had the most favorable cost-effectiveness profile when patients had previously experienced CIED infection, had a history of immunosuppressive therapy, or had a Prevention of Arrhythmia Device Infection Trial (PADIT) score indicating high risk of infection (scores ≥6), at cost-effectiveness thresholds of €50 000 in Germany (assumed in the absence of an official threshold), €40 000 in Italy, and £30 000 in England. Probabilistic sensitivity analysis indicated that the antibacterial envelope was likely to be cost-effective in patients with other risk factors (including replacement of high-power CIEDs, generator replacement with lead modification, and PADIT scores indicating intermediate risk of infection) when used with some device types and in some countries.
Conclusions: The absorbable antibacterial envelope was associated with cost-effectiveness ratios below European benchmarks in selected patients at increased risk of infection, suggesting the envelope provides value for European healthcare systems by reducing CIED infections.
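The decision-tree comparison in this abstract reduces to expected costs and QALYs under two infection probabilities. The sketch below shows that structure; every probability and cost is an illustrative placeholder, not a value from the trial or the model.

```python
# Hedged sketch of a two-arm decision tree: standard of care vs standard of
# care plus an infection-reducing envelope. All inputs are assumptions.

def expected_outcomes(p_infection, device_cost, infection_cost,
                      qaly_no_infection, qaly_infection):
    """Expected lifetime (cost, QALYs) for one arm of the tree."""
    cost = device_cost + p_infection * infection_cost
    qalys = (1 - p_infection) * qaly_no_infection + p_infection * qaly_infection
    return cost, qalys

# Envelope halves the (hypothetical) infection risk at an added device cost.
soc = expected_outcomes(0.030, 0.0, 50_000.0, 8.0, 7.0)
env = expected_outcomes(0.015, 1_000.0, 50_000.0, 8.0, 7.0)

icer = (env[0] - soc[0]) / (env[1] - soc[1])
print(round(icer))  # incremental cost per QALY gained
```

The higher a subgroup's baseline infection risk, the more infections (and infection costs) the envelope averts, which is why the ICER is most favorable in the high-risk subgroups the study identifies.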
Published: 1 July 2021
Value in Health, Volume 24; doi:10.1016/s1098-3015(21)01573-4
