Efficacy and Mechanism Evaluation
ISSN / EISSN : 2050-4365 / 2050-4373
Published by: National Institute for Health Research (10.3310)
Total articles ≈ 68
Latest articles in this journal
Published: 1 October 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-46; https://doi.org/10.3310/eme08150
Background Keratoconus is a disease of the cornea affecting vision that is usually first diagnosed in the first three decades of life. The abnormality of corneal shape and thickness tends to progress until the patient reaches approximately 30 years of age. Epithelium-off corneal cross-linking is a procedure that has been demonstrated to be effective in randomised trials in adults and in observational studies in young patients. Objectives The KERALINK trial examined the efficacy and safety of epithelium-off corneal cross-linking, compared with standard care by spectacle or contact lens correction, for stabilisation of progressive keratoconus. Design In this observer-masked, randomised, controlled, parallel-group superiority trial, 60 participants aged 10–16 years with progressive keratoconus were randomised; 58 participants completed the study. Progression was defined as a 1.5 D increase in corneal power measured by maximum or mean power (K2) in the steepest corneal meridian in the study eye, measured by corneal tomography. Setting Referral clinics in four UK hospitals. Interventions Participants were randomised to corneal cross-linking plus standard care or standard care alone, with spectacle or contact lens correction as necessary for vision, and were monitored for 18 months. Main outcome measures The primary outcome was K2 in the study eye as a measure of the steepness of the cornea at 18 months post randomisation. Secondary outcomes included keratoconus progression, visual acuity, keratoconus apex corneal thickness and quality of life. Results Of the 60 participants, 30 were randomised to each of the corneal cross-linking and standard-care groups. Of these, 30 patients in the corneal cross-linking group and 28 patients in the standard-care group were analysed. The mean (standard deviation) K2 in the study eye at 18 months post randomisation was 49.7 D (3.8 D) in the corneal cross-linking group and 53.4 D (5.8 D) in the standard-care group.
The adjusted mean difference in K2 in the study eye was –3.0 D (95% confidence interval –4.93 D to –1.08 D; p = 0.002), favouring corneal cross-linking. Uncorrected and corrected differences in logMAR vision at 18 months were better in eyes receiving corneal cross-linking: –0.31 (95% confidence interval –0.50 to –0.11; p = 0.002) and –0.30 (95% confidence interval –0.48 to –0.11; p = 0.002). Keratoconus progression in the study eye occurred in two patients (7%) randomised to corneal cross-linking compared with 12 patients (43%) randomised to standard care. The unadjusted odds ratio suggests that, on average, patients in the corneal cross-linking group had 90% lower odds of experiencing progression than those receiving standard care (odds ratio 0.1, 95% confidence interval 0.02 to 0.48; p = 0.004). Quality-of-life outcomes were similar in both groups. No adverse events were attributable to corneal cross-linking. Limitations Measurements of K2 in those eyes with the most significant progression were in some cases indicated as suspect by corneal tomography device software. Conclusions Corneal cross-linking arrests progression of keratoconus in the great majority of young patients. These data support consideration of a change in practice, such that corneal cross-linking could be considered as first-line treatment in progressive disease. If the arrest of keratoconus progression induced by corneal cross-linking is sustained in longer follow-up, there may be particular benefit in avoiding the later requirement for contact lens wear or corneal transplantation. However, keratoconus does not continue to progress in all patients receiving standard care. For future work, the most important questions to be answered are whether or not (1) the arrest of keratoconus progression induced by corneal cross-linking is maintained in the long term and (2) the proportion of those receiving standard care who show significant progression increases with time.
Trial registration Current Controlled Trials ISRCTN17303768 and EudraCT 2016-001460-11. Funding This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, a Medical Research Council (MRC) and National Institute for Health Research (NIHR) partnership. This will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 15. See the NIHR Journals Library website for further project information. The trial sponsor is University College London. This research was otherwise supported in part by the NIHR Moorfields Biomedical Research Centre and the NIHR Moorfields Clinical Research Facility, London, United Kingdom.
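The unadjusted odds ratio quoted above can be reproduced from the reported progression counts (2/30 with cross-linking vs. 12/28 with standard care). A minimal sketch in Python; this is a back-of-envelope check, not the trial's analysis:

```python
# Reproduce the unadjusted odds ratio for keratoconus progression from
# the counts reported in the abstract (illustrative check only).
def odds_ratio(events_a, total_a, events_b, total_b):
    """Unadjusted odds ratio of group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# Cross-linking: 2 of 30 progressed; standard care: 12 of 28 progressed.
or_xlink = odds_ratio(2, 30, 12, 28)
print(round(or_xlink, 2))  # 0.1 — about 90% lower odds, as reported
```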
Published: 1 September 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-28; https://doi.org/10.3310/eme08120
Background Women whose pregnancies are affected by hypertensive disorders of pregnancy, in particular preterm pre-eclampsia, are at increased risk of long-term cardiovascular morbidity and mortality. Objectives To investigate the hypothesis that prolongation of a pregnancy affected by preterm pre-eclampsia managed by expectant management, compared with planned early delivery, would result in worse cardiovascular function 6 months postpartum. Design A randomised controlled trial. Setting 28 maternity hospitals in England and Wales. Participants Women who were eligible for the Pre-eclampsia in HOspital: Early iNductIon or eXpectant management (PHOENIX) study were approached and recruited for the PHOEBE study. The PHOENIX study was a parallel-group, non-masked, multicentre, randomised controlled trial that was carried out in 46 maternity units across England and Wales. This study compared planned early delivery with expectant management (usual care), with individual randomisation, in women with late preterm pre-eclampsia who were from 34 weeks' to less than 37 weeks' gestation and had a singleton or dichorionic diamniotic twin pregnancy. Interventions Postpartum follow-up included medical history, blood pressure assessment and echocardiography. All women had blood sampling performed at at least two time points from recruitment to the 6-month follow-up for assessment of cardiac necrosis markers. Main outcome measures The primary outcome was a composite of systolic and/or diastolic dysfunction (originally defined by 2009 guidelines, then updated to 2016 guidelines with an amended definition of diastolic dysfunction). Analyses were by intention to treat, together with a per-protocol analysis for the primary and secondary outcomes. Results Between 27 April 2016 and 30 November 2018, 623 women were found to be eligible, of whom 420 (67%) were recruited across 28 maternity units in England and Wales.
A total of 133 women were allocated to planned delivery, 137 women were allocated to expectant management and a further 150 received non-randomised expectant management within usual care. The mean time from enrolment to delivery was 2.5 days (standard deviation 1.9 days) in the planned delivery group compared with 6.8 days (standard deviation 5.3 days) in the expectant management group. There were no differences in the primary outcome between women in the planned delivery group and those in the expectant management group using either the 2009 definition (risk ratio 1.06, 95% confidence interval 0.80 to 1.40) or the 2016 definition (risk ratio 0.78, 95% confidence interval 0.33 to 1.86). Overall, 10% (31/321) of women had a left ventricular ejection fraction < 55%, and 71% of the cohort remained hypertensive at 6 months postpartum. No differences were observed between groups in cardiorespiratory outcomes prior to discharge from hospital or in systolic or diastolic blood pressure measurements. Variables associated with the primary outcome (2009 definition) at 6 months postpartum were maternal body mass index (adjusted odds ratio 1.33 per 5 kg/m2, 95% confidence interval 1.12 to 1.59) and maternal age (adjusted odds ratio 2.16 per 10 years, 95% confidence interval 1.44 to 3.22). Limitations Limitations include the changing definitions of systolic and/or diastolic dysfunction. Conclusions Hypertension persisted in the majority of women with late preterm pre-eclampsia at 6 months postpartum, and systolic dysfunction was present in 10%. Pre-eclampsia should not be considered a self-limiting disease of pregnancy alone. Future work Future work should evaluate interventions aimed at reducing cardiovascular dysfunction. Trial registration Current Controlled Trials ISRCTN01879376. Funding This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, a Medical Research Council and National Institute for Health Research (NIHR) partnership.
This will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 12. See the NIHR Journals Library website for further project information.
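The adjusted odds ratios above are expressed per 5 kg/m2 of body mass index and per 10 years of age; such figures come from scaling a per-unit logistic regression coefficient. A hedged sketch, with the coefficient back-derived from the reported BMI figure purely for illustration (it is not the trial's fitted coefficient):

```python
import math

# Scale a per-unit log-odds coefficient to a per-k-unit odds ratio,
# as in "adjusted odds ratio 1.33 per 5 kg/m2".
def scaled_odds_ratio(beta_per_unit, units):
    return math.exp(beta_per_unit * units)

# Illustrative: back-derive the implied per-kg/m2 coefficient from the
# reported per-5 kg/m2 odds ratio of 1.33.
beta_bmi = math.log(1.33) / 5
print(round(scaled_odds_ratio(beta_bmi, 5), 2))  # 1.33 per 5 kg/m2
print(round(scaled_odds_ratio(beta_bmi, 1), 3))  # implied OR per 1 kg/m2
```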
Published: 1 September 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-36; https://doi.org/10.3310/eme08130
Background Reliable vascular access is essential for patients receiving haemodialysis. An arteriovenous fistula is the preferred option; however, these are prone to developing stenotic segments. These lesions are treated with angioplasty, but there is a high rate of recurrence. When the PAVE (Paclitaxel-assisted balloon Angioplasty of Venous stenosis in haEmodialysis access) trial was conceived, a number of small studies suggested that restenosis may be reduced by paclitaxel-coated balloons. Objective To test the efficacy of paclitaxel-coated balloons in arteriovenous fistulas. Design A randomised controlled trial. Setting Twenty UK centres. Participants Patients (aged ≥ 18 years) referred with a clinical indication for angioplasty of an arteriovenous fistula (212 patients in total, 106 per group). Interventions High-pressure plain balloon fistuloplasty was performed in all patients. In the intervention arm, the second component was insertion of a paclitaxel-coated balloon. In the control arm, an identical procedure was followed, but using a standard balloon. Main outcome measures The primary end point was time (days) to loss of target lesion primary patency. Secondary patency end points were time to loss of access circuit primary patency and time to loss of access circuit cumulative patency. Other secondary end points included angiographically determined late lumen loss, rate of binary angiographic restenosis, procedural success, number of thrombosis events, fistula interventions, adverse events during follow-up and patient quality of life. Results Primary analysis showed no evidence for a difference in time to end of target lesion primary patency between groups (hazard ratio 1.18, 95% confidence interval 0.78 to 1.79; p = 0.440). An adjusted secondary analysis with prespecified clinical covariates gave similar results (hazard ratio 1.11, 95% confidence interval 0.69 to 1.78; p = 0.664). 
Prespecified secondary outcomes included the time to intervention anywhere in the access circuit or the time until the fistula was abandoned. There were no differences in these patency-related secondary outcomes or in any other secondary outcomes, such as adverse events. Limitations The PAVE trial was not a fully blinded trial. It was impossible to ensure that treating radiologists were blinded to treatment allocation because of the appearance of the paclitaxel-coated balloon. The extent to which our findings can be generalised to patients with multiple lesions could be questioned, given the proportion randomised. However, had paclitaxel-coated balloons been effective at a single lesion, there would be no plausible reason why they could not be effective in patients with multiple lesions. Conclusions There were no differences in primary or secondary outcomes. Following plain balloon angioplasty, additional treatment with a paclitaxel-coated balloon does not provide benefit. Future work The reasons for the differences between the results of the PAVE trial and those of other studies deserve further analysis and consideration. Other interventions to prevent restenosis following fistuloplasty are needed. Trial registration Current Controlled Trials ISRCTN14284759. Funding This project was funded by the Efficacy and Mechanism Evaluation programme, a Medical Research Council and National Institute for Health Research (NIHR) partnership. This will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 13. See the NIHR Journals Library website for further project information.
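The reported p-value for the primary analysis can be approximately recovered from the hazard ratio and its confidence interval via a Wald test on the log scale, which is a useful consistency check when reading trial reports. A sketch (standard normal approximation, not the trial's exact likelihood-based computation):

```python
import math

# Recover an approximate two-sided Wald p-value from a ratio estimate
# and its 95% confidence interval (all on the log scale).
def wald_p_from_ci(estimate, lo, hi):
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = math.log(estimate) / se
    # standard normal CDF via the error function
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)

# PAVE primary analysis: HR 1.18 (95% CI 0.78 to 1.79), reported p = 0.440.
print(round(wald_p_from_ci(1.18, 0.78, 1.79), 3))
```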
Published: 1 August 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-90; https://doi.org/10.3310/eme08110
Background Reasoning may play a causal role in paranoid delusions in psychosis. SlowMo, a new digitally supported cognitive–behavioural therapy, targets reasoning to reduce paranoia. Objectives To examine the effectiveness of SlowMo therapy in reducing paranoia and in improving reasoning, quality of life and well-being, and to examine its mechanisms of action, moderators of effects and acceptability. Design A parallel-arm, assessor-blind, randomised controlled trial comparing SlowMo plus treatment as usual with treatment as usual alone. An online independent system randomised eligible participants (1 : 1) using randomly varying permuted blocks, stratified by site and paranoia severity. Setting Community mental health services in three NHS mental health trusts in England, plus patient identification centres. Participants A total of 362 participants with schizophrenia-spectrum psychosis. Eligibility criteria comprised distressing and persistent (≥ 3 months) paranoia. Interventions Eight face-to-face SlowMo sessions over 12 weeks plus treatment as usual, or treatment as usual alone (control group). Main outcome measures The primary outcome measure was paranoia measured by the Green Paranoid Thoughts Scale and its revised version, together with observer-rated measures of persecutory delusions (The Psychotic Symptom Rating Scales delusion scale and delusion items from the Scale for the Assessment of Positive Symptoms). The secondary outcome measures were reasoning (measures of belief flexibility, jumping to conclusions, and fast and slow thinking), well-being, quality of life, schemas, service use and worry. Results A total of 362 participants were recruited between 1 May 2017 and 14 May 2019: 181 in the SlowMo intervention group and 181 in the treatment-as-usual (control) group. One control participant subsequently withdrew. 
In total, 325 (90%) participants provided primary Green Paranoid Thoughts Scale outcome data at 12 weeks (SlowMo, n = 162; treatment as usual, n = 163). A total of 145 (80%) participants in the SlowMo group completed all eight therapy sessions. SlowMo was superior to treatment as usual in reducing paranoia on all three measures used: Green Paranoid Thoughts Scale total at 12 weeks (Cohen’s d = 0.30, 95% confidence interval 0.09 to 0.51; p = 0.005) and 24 weeks (Cohen’s d = 0.20, 95% confidence interval –0.02 to 0.40; p = 0.063); Psychotic Symptom Rating Scales delusions at 12 weeks (Cohen’s d = 0.47, 95% confidence interval 0.17 to 0.78; p = 0.002) and 24 weeks (Cohen’s d = 0.50, 95% confidence interval 0.20 to 0.80; p = 0.001); and Scale for the Assessment of Positive Symptoms persecutory delusions at 12 weeks (Cohen’s d = 0.43, 95% confidence interval 0.03 to 0.84; p = 0.035) and 24 weeks (Cohen’s d = 0.54, 95% confidence interval 0.14 to 0.94; p = 0.009). Reasoning (belief flexibility, possibility of being mistaken and the Fast and Slow Thinking Questionnaire measure) improved, but jumping to conclusions did not. Worry, quality of life, well-being and self-concept also improved, most strongly at 24 weeks. Baseline characteristics did not moderate treatment effects. Changes in belief flexibility and worry mediated changes in paranoia. Peer researcher-led qualitative interviews confirmed positive experiences of the therapy and technology. Nineteen participants in the SlowMo group and 21 participants in the treatment-as-usual group reported 54 adverse events (51 serious events, no deaths). Limitations The trial included treatment as usual as the comparator and, thus, the trial design did not control for the effects of time with a therapist. Conclusions To the best of our knowledge, this is the largest trial of a psychological therapy for paranoia in people with psychosis and the first trial using a brief targeted digitally supported therapy.
High rates of therapy uptake demonstrated acceptability. SlowMo was effective for paranoia, with effects comparable to those of longer therapies, and was equally effective for people with different levels of negative symptoms and working memory. Mediators were improvements in belief flexibility and worry. Our results suggest that targeting reasoning helps paranoia. Future work Further examination of SlowMo mechanisms of action and implementation. Trial registration Current Controlled Trials ISRCTN32448671. Funding This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, an MRC and National Institute for Health Research (NIHR) partnership. This will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 11. See the NIHR Journals Library website for further project information.
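Cohen's d, the effect-size measure reported throughout these results, is the between-group difference in means divided by the pooled standard deviation. A minimal sketch with illustrative numbers (not the trial's raw scores):

```python
import math

# Cohen's d from two group summaries, using the pooled standard
# deviation. Inputs below are illustrative, not trial data.
def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# e.g. intervention mean 48.0 vs control mean 51.0, both SD 10:
print(round(cohens_d(48.0, 10.0, 162, 51.0, 10.0, 163), 2))  # -0.3
```

A negative d here indicates lower (better) paranoia scores in the intervention group, matching the direction of the effects reported above.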
Published: 1 July 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-104; https://doi.org/10.3310/eme08100
Background Sepsis and acute respiratory distress syndrome are two heterogeneous acute illnesses with high risk of death and for which there are many ‘statistically negative’ randomised controlled trials. We hypothesised that negative randomised controlled trials occur because of between-participant differences in response to treatment, illness manifestation (phenotype) and risk of outcomes (heterogeneity). Objectives To assess (1) heterogeneity of treatment effect, which tests whether or not treatment effect varies with a patient’s pre-randomisation risk of outcome; and (2) whether or not subphenotypes explain the treatment response differences in sepsis and acute respiratory distress syndrome demonstrated in randomised controlled trials. Study population We performed secondary analysis of two randomised controlled trials in patients with sepsis [i.e. the Vasopressin vs Noradrenaline as Initial Therapy in Septic Shock (VANISH) trial and the Levosimendan for the Prevention of Acute oRgan Dysfunction in Sepsis (LeoPARDS) trial] and one acute respiratory distress syndrome multicentre randomised controlled trial [i.e. the Hydroxymethylglutaryl-CoA reductase inhibition with simvastatin in Acute lung injury to Reduce Pulmonary dysfunction (HARP-2) trial], conducted in the UK. The VANISH trial is a 2 × 2 factorial randomised controlled trial of vasopressin (Pressyn AR®; Ferring Pharmaceuticals, Saint-Prex, Switzerland) and hydrocortisone sodium phosphate (hereafter referred to as hydrocortisone) (Efcortesol™; Amdipharm plc, St Helier, Jersey) compared with placebo. The LeoPARDS trial is a two-arm, parallel-group randomised controlled trial of levosimendan (Simdax®; Orion Pharma, Espoo, Finland) compared with placebo. The HARP-2 trial is a parallel-group randomised controlled trial of simvastatin compared with placebo.
Methods We tested for heterogeneity of the effect on 28-day mortality of vasopressin, hydrocortisone and levosimendan in patients with sepsis, and of simvastatin in patients with acute respiratory distress syndrome. We used the total Acute Physiology And Chronic Health Evaluation II (APACHE II) score as the baseline risk measurement, comparing treatment effects in patients with baseline APACHE II scores above (high) and below (low) the median using regression models with an interaction between treatment and baseline risk. To identify subphenotypes, we performed latent class analysis using only baseline clinical and biomarker data, and compared clinical outcomes across subphenotypes and treatment groups. Results The odds of death in the highest APACHE II quartile compared with the lowest quartile ranged from 4.9 to 7.4 across the three trials. We did not observe heterogeneity of treatment effect for vasopressin, hydrocortisone or levosimendan. In the HARP-2 trial, simvastatin reduced mortality in the low-APACHE II group and increased mortality in the high-APACHE II group. In the VANISH trial, a two-subphenotype model provided the best fit for the data. Subphenotype 2 individuals had more inflammation and shorter survival. There were no treatment effect differences between the two subphenotypes. In the LeoPARDS trial, a three-subphenotype model provided the best fit for the data. Subphenotype 3 individuals had the greatest inflammation and the lowest survival. There were no treatment effect differences between the three subphenotypes, although survival was lowest in the levosimendan group for all subphenotypes. In the HARP-2 trial, a two-subphenotype model provided the best fit for the data. The inflammatory subphenotype was associated with fewer ventilator-free days and higher 28-day mortality.
Limitations The lack of heterogeneity of treatment effect and of any treatment effect differences between sepsis subphenotypes may be secondary to a lack of statistical power to detect such effects, if they truly exist. Conclusions We highlight the lack of heterogeneity of treatment effect in all three trial populations. We report three subphenotypes in sepsis and two subphenotypes in acute respiratory distress syndrome, with an inflammatory subphenotype at greater risk of death as a consistent finding in both conditions. Future work Our analysis highlights the need to identify key discriminant markers to characterise subphenotypes in sepsis and acute respiratory distress syndrome with an observational cohort study. Funding This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, an MRC and National Institute for Health Research (NIHR) partnership. This will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 10. See the NIHR Journals Library website for further project information.
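The heterogeneity-of-treatment-effect test described above asks whether the treatment effect differs between the low- and high-APACHE II strata, i.e. whether a treatment-by-baseline-risk interaction is present. A stdlib sketch using a ratio of stratum-specific odds ratios; all counts below are illustrative, not trial data:

```python
import math

# Wald-style test for a treatment-by-baseline-risk interaction on the
# log-odds scale: compare the treatment odds ratio between strata.
def log_or_and_se(a, b, c, d):
    """Log odds ratio and its SE from a 2x2 table: (a deaths, b survivors)
    in the treated group vs (c deaths, d survivors) in controls."""
    return math.log(a * d / (b * c)), math.sqrt(1/a + 1/b + 1/c + 1/d)

# Illustrative counts (deaths, survivors) per stratum:
low_treated, low_control = (10, 90), (12, 88)    # low APACHE II
high_treated, high_control = (40, 60), (30, 70)  # high APACHE II

lor_low, se_low = log_or_and_se(*low_treated, *low_control)
lor_high, se_high = log_or_and_se(*high_treated, *high_control)

# z-score for the difference in log odds ratios (the interaction term)
z = (lor_high - lor_low) / math.sqrt(se_low**2 + se_high**2)
print(round(z, 2))
```

In a full analysis this interaction is fitted within a logistic regression model, as the abstract describes, but the quantity being tested is the same.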
Published: 1 April 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-128; https://doi.org/10.3310/eme08070
Background Care homes are an increasingly important sector of care. Care home residents are particularly vulnerable to infections and are often prescribed antibiotics, driving antibiotic resistance. Probiotics may be a cheap and safe way to reduce antibiotic use. Efficacy and possible mechanisms of action are yet to be rigorously evaluated in this group. Objective The objective was to evaluate the efficacy and explore the mechanisms of action of a daily oral probiotic combination in reducing antibiotic use and infections in care home residents. Design This was a multicentre, parallel, individually randomised, placebo-controlled, double-blind trial, with qualitative evaluation and mechanistic studies. Setting A total of 310 care home residents were randomised from 23 UK care homes (from December 2016 to May 2018). Participants The participants were care home residents aged ≥ 65 years who were willing and able to give informed consent or, if they lacked capacity to consent, had a consultee to advise about participation on their behalf. Intervention A daily capsule containing an oral probiotic combination of Lactobacillus rhamnosus GG and Bifidobacterium animalis subsp. lactis BB-12 (n = 155) or matched placebo (n = 155) for up to 1 year. Main outcome measures The primary outcome was cumulative systemic antibiotic administration days for all-cause infections. Secondary outcomes included the incidence and duration of infections, antibiotic-associated diarrhoea, quality of life, hospitalisations and the detection of resistant Enterobacterales cultured from stool samples, among others. Methods Participants were randomised (1 : 1) to receive capsules containing probiotic or matched placebo. Randomisation was minimised by recruiting care home and by care home resident sex. Care home residents were followed up for 12 months with a review by a research nurse at 3 months and at 6–12 months post randomisation.
Care home residents, consultees, care home staff and all members of the trial team, including assessors and statisticians, were blinded to group allocation. Results Care home residents randomised to probiotic had a mean of 12.9 cumulative systemic antibiotic administration days (standard error 1.49 days) (n = 152) and care home residents randomised to placebo had a mean of 12.0 cumulative systemic antibiotic administration days (standard error 1.50 days) (n = 153) (adjusted incidence rate ratio = 1.13, 95% confidence interval 0.79 to 1.63; p = 0.495). There was no evidence of any beneficial effect on the incidence or duration of infections, antibiotic-associated diarrhoea, quality of life, hospitalisations, the detection of resistant Enterobacterales cultured from stool samples or other secondary outcomes. There was no evidence that this probiotic combination improved blood immune cell numbers, subtypes or responses to seasonal influenza vaccination. Conclusions Care home residents did not benefit from daily consumption of a combination of the probiotics Lactobacillus rhamnosus GG and Bifidobacterium animalis subsp. lactis BB-12 to reduce antibiotic consumption. Limitations Limitations included the following: truncated follow-up of some participants; higher-than-expected levels of probiotics in stool samples at baseline; fewer events than expected, which meant that study power may have been lower than anticipated; standard infection-related definitions were not used; and the findings are not necessarily generalisable, because effects may be strain specific and could vary according to patient population. Future work Future work could involve further rigorous efficacy, mechanism and effectiveness trials of other probiotics in other population groups and settings regarding antibiotic use and susceptibility to and recovery from infections, in which potential harms should be carefully studied. Trial registration Current Controlled Trials ISRCTN16392920.
Funding This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, an MRC and NIHR partnership. This will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 7. See the NIHR Journals Library website for further project information.
Published: 1 April 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-38; https://doi.org/10.3310/eme08060
Background Observational and pre-clinical studies have reported an association between selenium status, bone density, bone turnover and fracture risk. Selenium is an anti-oxidant, so we hypothesised that selenium could reduce the pro-resorptive action of reactive oxygen species on osteoclasts. Population mortality data suggest that the optimum range for serum selenium is 120–150 µg/l. Most adults in Europe are relatively selenium insufficient compared with adults in the USA and other geographical areas. Objectives The objectives of the study were to determine whether selenium supplementation in postmenopausal women with osteopenia decreased bone turnover, improved physical function or decreased markers of oxidative stress and inflammation. Design We conducted a 6-month double-blind, randomised, placebo-controlled trial. Setting This was a single-centre study in Sheffield, UK. Participants We recruited 120 postmenopausal women with osteopenia or osteoporosis. One hundred and fifteen women completed follow-up and were included in the intention-to-treat analysis. Interventions The interventions were sodium selenite (Selenase, biosyn, Germany) at 200 µg/day or 50 µg/day, and placebo. Main outcome measures The primary end point was urine N-terminal cross-linking telopeptide of type I collagen/creatinine (NTX/Cr) at 26 weeks. Groups were compared with analysis of covariance, using Hochberg testing for multiple comparisons. Secondary end points were other biochemical markers of bone turnover, bone mineral density by dual-energy X-ray absorptiometry and physical function scores (short physical performance battery and grip strength). The mechanistic end points were markers of inflammation and anti-oxidant activity (glutathione peroxidase, highly sensitive C-reactive protein and interleukin 6). Results In the 200 µg/day group, mean serum selenium increased from 78.8 µg/l (95% confidence interval 73.5 to 84.2 µg/l) to 105.7 µg/l (95% confidence interval 99.5 to 111.9 µg/l) at 26 weeks.
Urine NTX/Cr did not differ between treatment groups at 26 weeks. None of the secondary or mechanistic end-point measurements differed between the treatment groups at 26 weeks. Conclusions We conclude that selenium supplementation at these doses does not affect bone turnover (assessed by NTX/Cr) and is not beneficial for musculoskeletal health in postmenopausal women. Trial registration IRAS 200308, EudraCT 2016-002964-15 and ClinicalTrials.gov NCT02832648. Funding This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, an MRC and National Institute for Health Research (NIHR) partnership. This will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 6. See the NIHR Journals Library website for further project information.
Published: 1 April 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-160; https://doi.org/10.3310/eme08050
Background Tuberculosis (TB) is a devastating disease for which new diagnostic tests are desperately needed. Objective To validate promising new technologies [namely whole-blood transcriptomics, proteomics, flow cytometry and quantitative reverse transcription-polymerase chain reaction (qRT-PCR)] and existing signatures for the detection of active TB in samples obtained from individuals with suspected active TB. Design Four substudies, each of which used samples from the biobank collected as part of the interferon gamma release assay (IGRA) in the Diagnostic Evaluation of Active TB study, which was a prospective cohort of patients recruited with suspected TB. Setting Secondary care. Participants Adults aged ≥ 16 years presenting as inpatients or outpatients at 12 NHS hospital trusts in London, Slough, Oxford, Leicester and Birmingham, with suspected active TB. Interventions New tests using genome-wide gene expression microarray (transcriptomics), surface-enhanced laser desorption ionisation time-of-flight mass spectrometry/liquid chromatography–mass spectrometry (proteomics), flow cytometry or qRT-PCR. Main outcome measures Area under the curve (AUC), sensitivity and specificity were calculated to determine diagnostic accuracy. Positive and negative predictive values were calculated in some cases. A decision tree model was developed to calculate the incremental costs and quality-adjusted life-years of changing from current practice to using the novel tests. Results The project comprised four substudies, which assessed the previously published signatures measured with each of the new technologies, and a health economic analysis in which the best-performing tests were evaluated for cost-effectiveness. The diagnostic accuracy of the transcriptomic tests ranged from an AUC of 0.81 to 0.84 for detecting all TB in our cohort.
The performance for detecting culture-confirmed TB or pulmonary TB was better than for highly probable TB or extrapulmonary tuberculosis (EPTB), but was not high enough to be clinically useful. None of the previously described serum proteomic signatures for active TB provided good diagnostic accuracy, nor did the candidate rule-out tests. Four out of six previously described cellular immune signatures provided a reasonable level of diagnostic accuracy (AUC = 0.78–0.92) for discriminating all TB from those with other disease and latent TB infection in human immunodeficiency virus-negative TB suspects. Two of these assays may be useful in the IGRA-positive population and can provide high positive predictive value. None of the new tests for TB can be considered cost-effective. Limitations Assessment of the diagnostic performance of the new tests in the HIV-positive population was underpowered in each substudy. Conclusions Overall, the diagnostic performance of all previously identified ‘signatures’ of TB was lower than previously reported. This probably reflects the nature of the cohort we used, which included harder-to-diagnose groups, such as culture-unconfirmed TB or EPTB, that were under-represented in previous cohorts. Future work We have yet to achieve our secondary objective of deriving novel signatures of TB using our data sets; this was beyond the scope of this report. We recommend that future studies using these technologies target specific subtypes of TB, specifically those groups for which new diagnostic tests are required. Funding This project was funded by the Efficacy and Mechanism Evaluation (EME) programme, an MRC and NIHR partnership.
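The area under the curve (AUC) reported throughout this study equals the probability that a randomly chosen case scores higher on the test than a randomly chosen control. A minimal sketch of that rank-based computation (toy scores, not study data):

```python
# AUC via its probabilistic definition: the fraction of case/control
# pairs in which the case scores higher (ties count half).
def auc(case_scores, control_scores):
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / (len(case_scores) * len(control_scores))

# Toy example: three TB cases vs three controls.
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.2]))
```

An AUC of 0.5 corresponds to a test no better than chance; the 0.78–0.92 range quoted above for the cellular immune signatures indicates moderate-to-good discrimination.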
Published: 1 March 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-42; https://doi.org/10.3310/eme08040
Background The VEDERA (Very Early vs. Delayed Etanercept in Rheumatoid Arthritis) randomised controlled trial compared the effect of conventional synthetic disease-modifying anti-rheumatic drug (csDMARD) therapy with biologic DMARD (bDMARD) therapy using the tumour necrosis factor inhibitor etanercept in treatment-naive, early rheumatoid arthritis patients. The CADERA (Coronary Artery Disease Evaluation in Rheumatoid Arthritis) trial was a bolt-on study in which VEDERA patients underwent cardiovascular magnetic resonance imaging to detect preclinical cardiovascular disease at baseline and following treatment. Objectives To evaluate whether or not patients with treatment-naive early rheumatoid arthritis have evidence of cardiovascular disease compared with matched control subjects; whether or not this is modifiable with DMARD therapy; and whether or not bDMARDs confer advantages over csDMARDs. Design The VEDERA patients underwent cardiovascular magnetic resonance imaging at baseline and at 1 and 2 years after treatment. Setting The setting was a tertiary centre rheumatology outpatient clinic and specialist cardiovascular magnetic resonance imaging unit. Participants Eighty-one patients completed all assessments at baseline, 71 completed all assessments at 1 year and 56 completed all assessments at 2 years. Patients had no history of cardiovascular disease, had had rheumatoid arthritis symptoms for ≤ 1 year, were DMARD treatment-naive and had a minimum Disease Activity Score-28 of 3.2. Thirty control subjects without cardiovascular disease were approximately individually matched by age and sex to the first 30 CADERA patients. Patients with a Disease Activity Score-28 of ≥ 2.6 at 48 weeks were considered non-responders. 
Interventions In the VEDERA trial patients were randomised to group 1, immediate etanercept and methotrexate, or group 2, methotrexate ± additional csDMARD therapy in a treat-to-target approach, with a switch to delayed etanercept and methotrexate in the event of failure to achieve clinical remission at 6 months. Main outcome measures The primary outcome measures were the difference in baseline aortic distensibility between control subjects and the early rheumatoid arthritis group, and the baseline-to-year-1 change in aortic distensibility in the early rheumatoid arthritis group. Secondary outcome measures were myocardial perfusion reserve, left ventricular strain and twist, left ventricular ejection fraction and left ventricular mass. Results Baseline aortic distensibility [geometric mean (95% confidence interval)] was significantly reduced in patients (n = 81) compared with control subjects (n = 30) [3.0 × 10⁻³/mmHg (2.7 × 10⁻³/mmHg to 3.3 × 10⁻³/mmHg) vs. 4.4 × 10⁻³/mmHg (3.7 × 10⁻³/mmHg to 5.2 × 10⁻³/mmHg); p < 0.001]. Aortic distensibility [geometric mean (95% confidence interval)] improved significantly from baseline to year 1 across the whole patient cohort (n = 81, with imputation for missing values) [3.0 × 10⁻³/mmHg (2.7 × 10⁻³/mmHg to 3.4 × 10⁻³/mmHg) vs. 3.6 × 10⁻³/mmHg (3.1 × 10⁻³/mmHg to 4.1 × 10⁻³/mmHg); p < 0.001]. No significant difference in aortic distensibility improvement between baseline and year 1 was seen in the following comparisons (geometric means): group 1 (n = 40 at baseline) versus group 2 (n = 41 at baseline), 3.8 × 10⁻³/mmHg versus 3.4 × 10⁻³/mmHg, p = 0.49; combined groups 1 and 2 non-responders (n = 38) versus combined groups 1 and 2 responders (n = 43), 3.5 × 10⁻³/mmHg versus 3.6 × 10⁻³/mmHg, p = 0.87; group 1 non-responders (n = 17) versus group 1 responders (n = 23), 3.6 × 10⁻³/mmHg versus 3.9 × 10⁻³/mmHg, p = 0.73.
There was a trend towards a 10–30% difference in aortic distensibility between (group 1) responders who received first-line etanercept (n = 23) and (group 2) responders who never received etanercept (n = 13): 3.9 × 10⁻³/mmHg versus 2.8 × 10⁻³/mmHg; ratio 0.7 (95% confidence interval 0.4 to 1.2), p = 0.19; ratio adjusted for baseline aortic distensibility 0.8 (95% confidence interval 0.5 to 1.2), p = 0.29; ratio fully adjusted for baseline characteristics 0.9 (95% confidence interval 0.6 to 1.4), p = 0.56. Conclusions The CADERA study provides evidence of vascular changes in early rheumatoid arthritis compared with controls and shows improvement of these changes with rheumatoid arthritis DMARD therapy. Response to rheumatoid arthritis therapy does not add further to the modification of cardiovascular disease but, among responders to either strategy, etanercept/methotrexate may confer greater benefit than standard methotrexate/csDMARD therapy. Trial registration Current Controlled Trials ISRCTN89222125 and ClinicalTrials.gov NCT01295151. Funding This project was funded by the Efficacy and Mechanism Evaluation programme, a Medical Research Council and National Institute for Health Research (NIHR) partnership, and will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 4. See the NIHR Journals Library website for further project information. Pfizer Inc. (New York, NY, USA) supported the parent study, VEDERA, through an investigator-sponsored research grant (reference WS1092499).
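The CADERA results above are summarised as geometric means with 95% confidence intervals, and the between-group comparisons as ratios of geometric means. A minimal sketch of how such a summary is typically computed (log-transform, summarise, back-transform); the input values and the critical value `t_crit` are illustrative assumptions, not trial data:

```python
import math
import statistics

def geometric_mean_ci(values, t_crit=2.0):
    """Geometric mean with an approximate 95% CI, computed on the log scale.

    t_crit is a placeholder critical value; in practice the exact t quantile
    for n - 1 degrees of freedom would be used.
    """
    logs = [math.log(v) for v in values]
    m = statistics.mean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    # Back-transform the log-scale mean and interval limits
    return math.exp(m), math.exp(m - t_crit * se), math.exp(m + t_crit * se)

# Illustrative aortic distensibility values in units of 10^-3/mmHg (not trial data)
gm, lo, hi = geometric_mean_ci([2.4, 3.1, 2.8, 3.6, 2.9, 3.3])
```

On this scale, a ratio of geometric means (as reported for the responder comparison) corresponds to the back-transformed difference of log-scale means, which is why the adjusted comparisons are naturally expressed as ratios with confidence intervals around 1.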
Published: 1 February 2021
Efficacy and Mechanism Evaluation, Volume 8, pp 1-54; https://doi.org/10.3310/eme08030
Background Roux-en-Y gastric bypass is recognised as a standard of care in the treatment of diabetes mellitus and obesity. However, the optimal length of the Roux-en-Y gastric bypass limbs remains controversial, with substantial variation in practice. Specifically, a longer biliopancreatic limb length of 150 cm (‘long limb’) has been hypothesised to be better for the treatment of diabetes mellitus because it increases the postprandial secretion of gut hormones, such as glucagon-like peptide 1, and increases insulin sensitivity, compared with the Roux-en-Y gastric bypass utilising a standard biliopancreatic limb length of 50 cm (‘standard limb’). Objective To evaluate the mechanisms, clinical efficacy and safety of long limb versus the standard limb Roux-en-Y gastric bypass in patients undergoing metabolic surgery for obesity and diabetes mellitus. Design A double-blind, mechanistic randomised controlled trial was conducted to evaluate the mechanisms, clinical efficacy and safety of the two interventions. Setting Imperial College London, King’s College London and their associated NHS trusts. Participants Patients with obesity and type 2 diabetes mellitus who were eligible for metabolic surgery. Interventions Participants were randomly assigned (1 : 1) to 150-cm (long limb) or 50-cm (standard limb) biliopancreatic limb Roux-en-Y gastric bypass with a fixed alimentary limb of 100 cm. The participants underwent meal tolerance tests to measure glucose excursions, glucagon-like peptide 1 and insulin secretion, and hyperinsulinaemic–euglycaemic clamps with stable isotopes to measure insulin sensitivity preoperatively, at 2 weeks after the surgery and at matched 20% total body weight loss. Clinical follow-up continued up to 1 year. Main outcome measures Primary – postprandial peak of active glucagon-like peptide 1 concentration at 2 weeks after intervention. 
Secondary – fasting and postprandial glucose and insulin concentrations, insulin sensitivity, glycaemic control and weight loss at 12 months after surgery, and participant safety. Results Of the 53 participants randomised, 48 completed the trial. There were statistically significant decreases in fasting and postprandial glucose concentrations, increases in insulin and glucagon-like peptide 1 secretion and in insulin sensitivity, and reductions in glycated haemoglobin (HbA1c) levels and weight in both the long limb and standard limb groups. However, there were no significant differences between the trial groups in any of these parameters. Limitations The main limitations of this trial are the relatively short follow-up of 12 months and the elongation of the biliopancreatic limb to a fixed length of 150 cm. Conclusion Patients undergoing both types of Roux-en-Y gastric bypass benefited metabolically from the surgery. The results have not demonstrated that elongation of the biliopancreatic limb of the Roux-en-Y gastric bypass from 50 cm to 150 cm results in superior metabolic outcomes in terms of glucose excursions, insulin and incretin hormone secretion, and insulin sensitivity, when assessed up to 12 months after surgery. Future work Continued longitudinal follow-up of the long limb and standard limb cohorts will be necessary to evaluate any differential effects of the two surgical procedures on patients’ metabolic trajectories. Trial registration Current Controlled Trials ISRCTN15283219. Funding This project was funded by the Efficacy and Mechanism Evaluation programme, a Medical Research Council and National Institute for Health Research (NIHR) partnership, and will be published in full in Efficacy and Mechanism Evaluation; Vol. 8, No. 3. See the NIHR Journals Library website for further project information.
The Section of Endocrinology and Investigative Medicine is funded by grants from the Medical Research Council, the Biotechnology and Biological Sciences Research Council, the NIHR, an Integrative Mammalian Biology Capacity Building Award and an FP7-HEALTH-2009-241592 EuroCHIP grant, and is also supported by the NIHR Biomedical Research Centre Funding Scheme.
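The primary outcome of the trial above is the postprandial peak of active glucagon-like peptide 1 concentration during a meal tolerance test; an incremental area under the curve above the fasting value is a common companion summary for such excursion data. A minimal sketch of both calculations using the trapezoidal rule; the sampling times and concentrations below are hypothetical, not trial data:

```python
def peak_and_incremental_auc(times, conc):
    """Peak concentration and incremental area under the curve (iAUC)
    above the fasting (time-zero) value, via the trapezoidal rule."""
    baseline = conc[0]          # fasting value taken as the first sample
    peak = max(conc)
    iauc = 0.0
    # Sum trapezoid areas over consecutive sampling intervals
    for (t0, c0), (t1, c1) in zip(zip(times, conc), zip(times[1:], conc[1:])):
        iauc += (t1 - t0) * ((c0 - baseline) + (c1 - baseline)) / 2.0
    return peak, iauc

# Hypothetical meal-tolerance-test sampling: minutes vs. active GLP-1 (pmol/l)
t = [0, 15, 30, 60, 90, 120]
c = [5.0, 25.0, 40.0, 30.0, 15.0, 8.0]
peak, iauc = peak_and_incremental_auc(t, c)
```

Summarising the excursion as a peak (as in the trial's primary outcome) is robust to the exact sampling schedule, whereas iAUC weights the whole postprandial response; the two can rank treatment groups differently when curves have similar peaks but different durations.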