Di Paolo M, Hewitt L, Nwankwo E, et al., 2021, Erratum to: A retrospective 'real-world' cohort study of azole therapeutic drug monitoring and evolution of antifungal resistance in cystic fibrosis., JAC Antimicrob Resist, Vol: 3
[This corrects the article DOI: 10.1093/jacamr/dlab026.].
Tase A, Buckle P, Ni MZ, et al., 2021, Medical device error and failure reporting: Learning from the car industry, Journal of Patient Safety and Risk Management, Vol: 26, Pages: 135-141, ISSN: 2516-0435
Background: Improving the design of technology relies, in part, on the reporting of performance failures in existing devices. Healthcare has low levels of formal reporting of performance and failure of medical equipment. This paper examines methods of reporting in the car industry and healthcare, and aims to understand differences and identify opportunities for improvement within healthcare. Methods: A literature search was carried out in PubMed, Medline, Embase, Engineering Village and Scopus. NHS England and MHRA publications and guidelines were also reviewed. Focus was placed on the current system of reporting in both industries, the known degree of patient harm, initiating factors, barriers, quality and methods of incident investigation, and their validity. The findings were used to compare the error reporting systems of the two industries. Results: Derivation of healthcare incident data from different sources means the full extent of patient harm is not known. For example, in 2012 there were 13,549 and 38,395 incidents reported by the MHRA and the NRLS (National Reporting and Learning System) respectively, leading to uncertainty about the extent of the problem. The car industry emphasises the role of the reporting source in ensuring data quality. Utilising some aspects of this approach might benefit healthcare reporting. These include a specific reporting system that stresses the importance of organisational learning in improving safety and recognises the limitations of root cause analysis. Conclusions: Learning from reporting systems within the car industry may help the healthcare sector improve its own reporting, aiding healthcare performance.
Huddy JR, Ni MZ, Barlow J, et al., 2021, Qualitative analysis of stakeholder interviews to identify the barriers and facilitators to the adoption of point-of-care diagnostic tests in the UK., BMJ Open, Vol: 11, Pages: 1-9, ISSN: 2044-6055
OBJECTIVES: This study investigated the barriers and facilitators to the adoption of point-of-care tests (POCTs). DESIGN: Qualitative study incorporating a constant comparative analysis of stakeholder responses to a series of interviews undertaken to design the Point-of-Care Key Evidence Tool. SETTING: The study was conducted in relation to POCTs used in all aspects of healthcare. PARTICIPANTS: Forty-three stakeholders were interviewed, including clinicians (incorporating laboratory staff and members of trust POCT committees), commissioners, industry, regulators and patients. RESULTS: Thematic analysis highlighted 32 barriers in six themes and 28 facilitators in eight themes to the adoption of POCTs. Six themes were common to both barriers and facilitators (clinical, cultural, evidence, design and quality assurance, financial and organisational) and two themes contained facilitators alone (patient factors and other (non-financial) resource use). CONCLUSIONS: Findings from this study demonstrate the complex motivations of stakeholders in the adoption of POCTs. Most themes were common to both barriers and facilitators, suggesting that good device design, stakeholder engagement and appropriate evidence provision can increase the likelihood of POCT device adoption. However, it is important to realise that while the majority of identified barriers may be perceived or mitigated, some may be absolute; if these are identified early in device development, further investment should be carefully considered.
Di Paolo M, Hewitt L, Nwankwo E, et al., 2021, A retrospective 'real-world' cohort study of azole therapeutic drug monitoring and evolution of antifungal resistance in cystic fibrosis., JAC Antimicrob Resist, Vol: 3
Background: Individuals with cystic fibrosis (CF) have an increased susceptibility to fungal infection/allergy, with triazoles often used as first-line therapy. Therapeutic drug monitoring (TDM) is essential due to significant pharmacokinetic variability and the recent emergence of triazole resistance worldwide. Objectives: In this retrospective study we analysed the 'real-world' TDM of azole therapy in a large CF cohort, risk factors for subtherapeutic dosing, and the emergence of azole resistance. Methods: All adults with CF on azole therapy in a large single UK centre were included. Clinical demographics, TDM and microbiology were analysed over a 2-year study period (2015-17), with multivariate logistic regression used to identify risk factors for subtherapeutic dosing. Results: 91 adults were treated with azole medication during the study period. A high prevalence of chronic subtherapeutic azole dosing was seen with voriconazole (60.8%) and itraconazole capsule (59.6%) use, representing significant risk factors for subtherapeutic levels. Rapid emergence of azole resistance was additionally seen over the follow-up period, with a 21.4% probability of CF patients developing a resistant fungal isolate after 2 years. However, no significant relationship was found between subtherapeutic azole dosing and the emergence of azole resistance. Conclusions: Our study demonstrates a high prevalence of subtherapeutic azole levels in CF adults, with increased risk when using itraconazole capsules and voriconazole therapy. We show rapid emergence of azole resistance, highlighting the need for effective antifungal stewardship. Further large longitudinal studies are needed to understand the effects of antifungal resistance on outcome in CF and the implications of subtherapeutic dosing on resistance evolution.
Nwankwo L, McLaren K, Donovan J, et al., 2021, Utilisation of remote capillary blood testing in an outpatient clinic setting to improve shared decision making and patient and clinician experience: a validation and pilot study, BMJ OPEN QUALITY, Vol: 10
Klevebro F, Boshier PR, Savva K, et al., 2020, Severe Dumping Symptoms Are Uncommon Following Transthoracic Esophagectomy But Significantly Decrease Health-Related Quality of Life in Long-Term, Disease-Free Survivors, JOURNAL OF GASTROINTESTINAL SURGERY, Vol: 25, Pages: 1941-1947, ISSN: 1091-255X
Hanna GB, Mackenzie H, Miskovic D, et al., 2020, Laparoscopic colorectal surgery outcomes improved after national training program (LAPCO) for specialists in England, Annals of Surgery, Pages: 1-1, ISSN: 0003-4932
OBJECTIVE: To examine the impact of the National Training Programme for Laparoscopic Colorectal Surgery (Lapco) on the rate of laparoscopic surgery and clinical outcomes of cases performed by Lapco surgeons after completion of training. SUMMARY BACKGROUND DATA: Lapco provided competency-based supervised clinical training for specialist colorectal surgeons in England. METHODS: We compared the rate of laparoscopic surgery, mortality and morbidity for colorectal cancer resections by Lapco delegates and non-Lapco surgeons in 3-year periods preceding and following Lapco, using difference-in-differences analysis. The changes in the rate of post-Lapco laparoscopic surgery with the Lapco sign-off competency assessment and in-training global assessment scores were examined using risk-adjusted cumulative sum to determine their predictive clinical validity, with predefined competent scores of 3 and 5 respectively. RESULTS: 108 Lapco delegates performed 4586 elective colorectal resections pre-Lapco and 5115 post-Lapco, while non-Lapco surgeons performed 72,930 matched cases. Lapco delegates had a 37.8% increase in laparoscopic surgery, which was greater than non-Lapco surgeons by 20.9% (95% CI, 18.5 to 23.3, p<0.001), with a relative decrease in 30-day mortality of -1.6% (95% CI, -3.4 to -0.2, p=0.039) and in 90-day mortality of -2.3% (95% CI, -4.3 to -0.4, p=0.018). The change point of the risk-adjusted cumulative sum was 3.12 for the competency assessment tool and 4.74 for the global assessment score, whereas the laparoscopic rate increased from 44% to 66% and 40% to 56% respectively. CONCLUSIONS: Lapco increased the rate of laparoscopic colorectal cancer surgery and reduced mortality and morbidity in England. In-training competency assessment tools predicted clinical performance after training.
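The learning-curve analyses above rest on the risk-adjusted cumulative sum (RA-CUSUM). A minimal sketch of the idea, assuming a per-case predicted risk from some case-mix model (the paper's exact statistic and risk adjustment may differ):

```python
def ra_cusum(outcomes, expected_risks):
    """Minimal risk-adjusted CUSUM curve for a surgeon's case series.

    outcomes: 1 if the adverse event (e.g. 30-day death) occurred, else 0.
    expected_risks: case-mix-adjusted predicted probability of that event.
    Each step adds (observed - expected); sustained downward drift means
    outcomes are better than predicted, and a change in slope marks the
    change point. Simplified illustration, not the paper's exact method.
    """
    s, curve = 0.0, []
    for y, p in zip(outcomes, expected_risks):
        s += y - p
        curve.append(s)
    return curve

# toy series: two early adverse events, then none, against a flat 20% risk
curve = ra_cusum([1, 1, 0, 0, 0], [0.2] * 5)
print(curve)
```

In this toy series the curve rises over the first two cases and then drifts steadily downward, which is the signature of a proficiency-gain change point.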
Nwankwo L, McLaren K, Donovan J, et al., 2020, Utilisation of Remote Capillary Blood Testing in an Outpatient Clinic Setting to improve shared decision making and patient and clinician experience: a validation and pilot study, medRxiv
Background: In a tertiary respiratory centre, large cohorts of patients are managed in an outpatient setting and require blood tests to monitor disease activity and organ toxicity. This requires either visits to tertiary centres for phlebotomy and physician review or utilisation of primary care services. Objectives: This study aims to validate remote capillary blood testing in an outpatient setting and analyse its impact on clinical pathways. Methods: A single-centre prospective cross-sectional validation and parallel observational study was performed. Remote finger-prick capillary blood testing was validated against local standard venesection using comparative statistical analysis: paired t-test, correlation and Bland-Altman. Capillary was considered interchangeable with venous samples if all three criteria were met: non-significant paired t-test (i.e. p>0.05), Pearson's correlation coefficient (r) >0.8, and 95% of tests within 10% difference through Bland-Altman (limits of agreement). In parallel, current clinical pathways including phlebotomy practice were analysed over 4 weeks to review test predictability. A subsequent pilot cohort study analysed the potential impact of remote capillary blood sampling on shared decision making and outpatient clinical pathways. Results: 117 paired capillary and venous blood samples were prospectively analysed. Interchangeability with venous blood was seen with HbA1c (%), total protein and CRP. Further tests, although not interchangeable, are likely useful to enable longitudinal remote monitoring (e.g. liver function, total IgE, and vitamin D). 65% of outpatient clinic blood tests were predictable, with 16% of patients requiring further contact due to actions required. Pilot implementation of remote capillary sampling showed patient- and clinician-reported improvement in shared decision making given contemporaneous blood test results. Conclusions: Remote capillary blood sampling can be used accurately for specific tests t
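The three interchangeability criteria stated in the methods can be expressed compactly in code. A hedged Python sketch using the thresholds quoted above (the paper's exact Bland-Altman limits-of-agreement computation may differ, and the paired data below are synthetic):

```python
import numpy as np
from scipy import stats

def interchangeable(venous, capillary, r_min=0.8, within_pct=10.0):
    """Apply the study's three criteria for capillary/venous interchangeability:
    1) non-significant paired t-test (p > 0.05),
    2) Pearson's correlation coefficient r > 0.8,
    3) >= 95% of paired differences within 10% (Bland-Altman-style check).
    Simplified sketch of the stated criteria, not the paper's exact analysis."""
    venous = np.asarray(venous, dtype=float)
    capillary = np.asarray(capillary, dtype=float)
    _, p = stats.ttest_rel(venous, capillary)               # criterion 1
    r, _ = stats.pearsonr(venous, capillary)                # criterion 2
    pct_diff = 100.0 * np.abs(capillary - venous) / venous  # criterion 3
    return p > 0.05 and r > r_min and np.mean(pct_diff <= within_pct) >= 0.95

# synthetic paired samples: small, zero-mean alternating measurement error
venous = np.linspace(50.0, 80.0, 100)          # e.g. total protein, g/L
capillary = venous + np.tile([0.3, -0.3], 50)  # alternating +/-0.3 offset
print(interchangeable(venous, capillary))      # True for this toy data
```

A systematically biased sample (e.g. capillary consistently 20% above venous) fails the paired t-test and is correctly rejected by the same function.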
Vijayasingam A, Frost E, Wilkins J, et al., 2020, Tablet and web-based audiometry to screen for hearing loss in adults with cystic fibrosis, Thorax, Vol: 75, Pages: 632-639, ISSN: 0040-6376
INTRODUCTION: Individuals with chronic lung disease (eg, cystic fibrosis (CF)) often receive antimicrobial therapy, including aminoglycosides, resulting in ototoxicity. Extended high-frequency audiometry has increased sensitivity for ototoxicity detection, but diagnostic audiometry in a sound-booth is costly, time-consuming and requires a trained audiologist. This cross-sectional study analysed tablet-based audiometry (Shoebox MD) performed by non-audiologists in an outpatient setting, alongside home web-based audiometry (3D Tune-In), to screen for hearing loss in adults with CF. METHODS: Hearing was analysed in 126 CF adults using validated questionnaires, a web self-hearing test (0.5 to 4 kHz), tablet (0.25 to 12 kHz) and sound-booth audiometry (0.25 to 12 kHz). A threshold of ≥25 dB hearing loss at ≥1 audiometric frequency was considered abnormal. Demographics and mitochondrial DNA sequencing were used to analyse risk factors, and the accuracy and usability of the hearing tests determined. RESULTS: Prevalence of hearing loss within any frequency band tested was 48%. Multivariate analysis showed age (OR 1.127 (95% CI 1.07 to 1.18; p<0.0001) per year older) and total intravenous antibiotic days over 10 years (OR 1.006 (95% CI 1.002 to 1.010; p=0.004) per further intravenous day) were significantly associated with increased risk of hearing loss. Tablet audiometry had good usability and was 93% sensitive and 88% specific, with 94% negative predictive value, to screen for hearing loss, compared with web self-test audiometry and questionnaires, which had poor sensitivity (17% and 13%, respectively). Intraclass correlation (ICC) of tablet versus sound-booth audiometry showed high correlation (ICC >0.9) at all frequencies ≥4 kHz. CONCLUSIONS: Adults with CF have a high prevalence of drug-related hearing loss, and tablet-based audiometry can be a practical, accurate screening tool within integrated ototoxicity monitoring programmes for early detection.
Markar SR, Ni M, Gisbertz SS, et al., 2020, Implementation of Minimally Invasive Esophagectomy From a Randomized Controlled Trial Setting to National Practice, JOURNAL OF CLINICAL ONCOLOGY, Vol: 38, Pages: 2130+, ISSN: 0732-183X
Markar SR, Ni M, Mackenzie H, et al., 2020, The effect of time between procedures upon the proficiency gain period for minimally invasive esophagectomy, SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES, Vol: 34, Pages: 2703-2708, ISSN: 0930-2794
Harris A, Butterworth J, Boshier PR, et al., 2020, Development of a Reliable Surgical Quality Assurance System for 2-stage Esophagectomy in Randomized Controlled Trials., Ann Surg
OBJECTIVE: The aim was to develop a reliable surgical quality assurance system for 2-stage esophagectomy. This development was conducted during the pilot phase of the multicenter ROMIO trial, collaborating with international experts. SUMMARY OF BACKGROUND DATA: There is evidence that the quality of surgical performance in randomized controlled trials influences clinical outcomes, quality of lymphadenectomy and loco-regional recurrence. METHODS: Standardization of 2-stage esophagectomy was based on structured observations, semi-structured interviews, hierarchical task analysis, and a Delphi consensus process. This standardization provided the structure for the operation manual and video and photographic assessment tools. Reliability was examined using generalizability theory. RESULTS: Hierarchical task analysis for 2-stage esophagectomy comprised fifty-four steps. Consensus (75%) agreement was reached on thirty-nine steps, whereas fifteen steps had a majority decision. An operation manual and record were created. A thirty-five-item video assessment tool was developed that assessed the process (safety and efficiency) and quality of the end product (anatomy exposed and lymphadenectomy performed) of the operation. The quality of the end product section was used as a twenty-seven-item photographic assessment tool. Thirty-one videos and fifty-three photographic series were submitted from the ROMIO pilot phase for assessment. The overall G-coefficient for the video assessment tool was 0.744, and for the photographic assessment tool was 0.700. CONCLUSIONS: A reliable surgical quality assurance system for 2-stage esophagectomy has been developed for surgical oncology randomized controlled trials. ETHICAL APPROVAL: 11/NW/0895 and confirmed locally as appropriate, 12/SW/0161, 16/SW/0098. TRIAL REGISTRATION NUMBER: ISRCTN59036820, ISRCTN10386621.
Ni M, Borsci S, Walne S, et al., 2020, The Lean and Agile Multi-dimensional Process (LAMP) - a new framework for rapid and iterative evidence generation to support health-care technology design and development, EXPERT REVIEW OF MEDICAL DEVICES, Vol: 17, Pages: 277-288, ISSN: 1743-4440
Subbe CP, Bannard-Smith J, Bunch J, et al., 2019, Quality metrics for the evaluation of Rapid Response Systems: Proceedings from the third international consensus conference on Rapid Response Systems (vol 141, pg 1, 2019), RESUSCITATION, Vol: 145, Pages: 93-94, ISSN: 0300-9572
Subbe CP, Bannard-Smith J, Bunch J, et al., 2019, Quality metrics for the evaluation of Rapid Response Systems: Proceedings from the third international consensus conference on Rapid Response Systems, RESUSCITATION, Vol: 141, Pages: 1-12, ISSN: 0300-9572
Vijayasingam A, Frost E, Wilkins J, et al., 2019, S140 Interim results from a prospective study of tablet and web-based audiometry to detect ototoxicity in adults with cystic fibrosis (vol 73, pg A87, 2018), THORAX, Vol: 74, Pages: 723-723, ISSN: 0040-6376
Huddy JR, Ni M, Misra S, et al., 2019, Development of the Point-of-Care Key Evidence Tool (POCKET): a checklist for multi-dimensional evidence generation in point-of-care tests, Clinical Chemistry and Laboratory Medicine, Vol: 57, Pages: 845-855, ISSN: 1434-6621
Background: This study aimed to develop the Point-of-Care Key Evidence Tool (POCKET): a multi-dimensional checklist to guide the evaluation of point-of-care tests (POCTs) incorporating validity, utility, usability, cost-effectiveness and patient experience. The motivation for this was to improve the efficiency of evidence generation in POCTs and reduce the lead-time for the adoption of novel POCTs. Methods: A mixed qualitative and quantitative approach was applied. Following a literature search, a three-round Delphi process was undertaken, incorporating a semi-structured interview study and two questionnaire rounds. Participants included clinicians, laboratory personnel, commissioners, regulators (including members of National Institute for Health and Care Excellence [NICE] committees), patients, industry representatives and methodologists. Qualitative data were analysed based on grounded theory. The final tool was revised at an expert stakeholder workshop. Results: Forty-three participants were interviewed within the semi-structured interview study, 32 participated in the questionnaire rounds and nine stakeholders attended the expert workshop. The final version of the POCKET checklist contains 65 different evidence requirements grouped into seven themes. Face validity, content validity and usability have been demonstrated. There exists a shortfall between the evidence that industry and research methodologists believe should be generated regarding POCTs and what is actually required by policy and decision makers to promote implementation into current healthcare pathways. Conclusions: This study has led to the development of POCKET, a checklist for evidence generation and synthesis in POCTs. This aims to guide industry and researchers to the evidence that is required by decision makers to facilitate POCT adoption, so that the benefits they can bring to patients can be effectively realised.
Vijayasingam A, Shah A, Simmonds NJ, et al., 2018, INTERIM RESULTS FROM A PROSPECTIVE STUDY OF TABLET AND WEB-BASED AUDIOMETRY TO DETECT OTOTOXICITY IN ADULTS WITH CYSTIC FIBROSIS, THORAX, Vol: 73, Pages: A87-A88, ISSN: 0040-6376
Vijayasingam A, Shah A, Simmonds NJ, et al., 2018, S140 Interim results from a prospective study of tablet and web-based audiometry to detect ototoxicity in adults with cystic fibrosis, British Thoracic Society Winter Meeting 2018, QEII Centre, Broad Sanctuary, Westminster, London SW1P 3EE, 5 to 7 December 2018, Programme and Abstracts, Publisher: BMJ Publishing Group Ltd and British Thoracic Society
Roberts HW, Wagh VK, Mullens IJM, et al., 2018, Evaluation of a hub-and-spoke model for the delivery of femtosecond laser-assisted cataract surgery within the context of a large randomised controlled trial, British Journal of Ophthalmology, Vol: 102, Pages: 1556-1563, ISSN: 0007-1161
AIMS: To test a hypothesis that cataract operating room (OR) productivity can be improved with a femtosecond laser (FL) using a hub-and-spoke model, and whether any increase in productivity can offset additional costs relating to the FL. METHODS: 400 eyes of 400 patients were enrolled in a randomised controlled trial comparing FL-assisted cataract surgery (FLACS) with conventional phacoemulsification surgery (CPS). 299 of 400 operations were performed on designated high-volume theatre lists (FLACS=134, CPS=165), where a hub-and-spoke FLACS model (1×FL, 2×ORs=2:1) was compared with independent CPS theatre lists. Details of operative timings and OR utilisation were recorded. Differences in productivity between hub-and-spoke FLACS and CPS sessions were compared using an economic model, including testing hypothetical 3:1 and 4:1 models. RESULTS: The duration of the operation itself was 12.04±4.89 min for FLACS compared with 14.54±6.1 min for CPS (P<0.001). Total patient time in the OR was reduced from 23.39±6.89 min with CPS to 20.34±5.82 min with FLACS (P<0.001), a reduction of 3.05 min per case. There was no difference in OR turnaround time between the models. The average number of patients treated per theatre list was 9 for FLACS and 8 for CPS. OR utilisation was 92.08% for FLACS and 95.83% for CPS (P<0.001). Using a previously established economic model, the FLACS service cost £144.60 more than CPS per case. This difference would be £131 and £125 for 3:1 and 4:1 models, respectively. CONCLUSION: The FLACS hub-and-spoke model was significantly faster than CPS, with patients spending less time in the OR. This enabled an improvement in productivity, but one insufficient to meaningfully offset the additional costs relating to FLACS.
Roberts HW, Myerscough J, Borsci S, et al., 2018, Time and motion studies of National Health Service cataract theatre lists to determine strategies to improve efficiency, British Journal of Ophthalmology, Vol: 102, Pages: 1259-1267, ISSN: 0007-1161
Aim: To provide a quantitative assessment of cataract theatre lists focusing on productivity and staffing levels/tasks using time and motion studies. Methods: National Health Service (NHS) cataract theatre lists were prospectively observed in five different institutions (four NHS hospitals and one private hospital). The individual tasks, and their timings, of every member of staff were recorded. Multiple linear regression analyses were performed to investigate possible associations between individual timings and tasks. Results: 140 operations were studied over 18 theatre sessions. The median number of scheduled cataract operations was 7 (range: 5–14). The average duration of an operation was 10.3 min (SD 4.11 min). The average time to complete one case including patient turnaround was 19.97 min (SD 8.77 min). The proportion of the surgeons' time occupied on total duties or operating ranged from 65.2% to 76.1% and from 42.4% to 56.7%, respectively. The correlation of surgical time to patient time in theatre was R2=0.95. A multiple linear regression model found a significant association (F(3,111)=32.86, P<0.001), with R2=0.47, between the duration of one operation and the number of allied healthcare professionals (AHPs), the number of AHP key tasks and the time taken by the AHPs to perform these key tasks. Conclusions: Significant variability in the number of cases performed and in the efficiency of patient flow was found between different institutions. Time and motion studies identified requirements for high-volume models and factors relating to performance. Supporting the surgeon with sufficient AHPs, and with tasks performed by AHPs, could improve surgical efficiency, up to approximately double the productivity of conventional theatre models.
Borsci S, Uchegbu I, Buckle P, et al., 2017, Designing medical technology for resilience: Integrating health economics and human factors approaches, Expert Review of Medical Devices, Vol: 15, Pages: 15-26, ISSN: 1743-4440
INTRODUCTION: The slow adoption of innovation into healthcare calls into question the manner of evidence generation for medical technology. This paper identifies potential reasons for this, including a lack of attention to human factors, poor evaluation of economic benefits, a lack of understanding of the existing healthcare system, and a failure to recognise the need to generate resilient products. Areas covered: Recognising a cross-disciplinary need to enhance evidence generation early in a technology's life cycle, the present paper proposes a new approach that integrates human factors and health economic evaluation as part of a wider systems approach to the design of technology. This approach (Human and Economic Resilience Design for Medical Technology, or HERD MedTech) supports early stages of product development and is based on the recent experiences of the National Institute for Health Research London Diagnostic Evidence Co-operative in the UK. Expert commentary: HERD MedTech i) proposes a shift from design for usability to design for resilience, ii) aspires to reduce the need for service adaptation to technological constraints, iii) ensures value of innovation at the time of product development, and iv) aims to stimulate discussion around the integration of pre- and post-market methods of assessment of medical technology.
Shah A, Abdolrasouli A, Schelenz S, et al., 2017, Latent class modelling for pulmonary aspergillosis diagnosis in lung transplant recipients, Winter Meeting of the British-Thoracic-Society, Publisher: BMJ PUBLISHING GROUP, Pages: A13-A14, ISSN: 0040-6376
Rationale: Timely, accurate diagnosis of invasive aspergillosis (IA) is key to enable initiation of antifungal therapy in lung transplantation. Despite promising novel fungal biomarkers, the lack of a diagnostic gold standard creates difficulty in determining utility. Objectives: This study aimed to use latent class modelling of fungal diagnostics to classify lung transplant recipients (LTR) with IA in a large single centre. Methods: Regression models were used to compare composite biomarker testing of bronchoalveolar lavage to clinical and EORTC-MSG guideline-based diagnosis of IA, with mortality used as a surrogate primary outcome measure. Bootstrap analysis identified radiological features associated with IA. Bayesian latent class modelling was used to define IA. Measurements and Main Results: A clinical diagnosis of fungal infection (p<0.001) and composite biomarker-positive results (p<0.001) were associated with significantly increased 12-month mortality. There was poor correlation between clinical diagnosis, EORTC-based IA diagnosis and composite biomarker positivity. Tracheobronchitis was positively predictive of a clinical and a composite biomarker-positive diagnosis of IA (p=0.004; 95% CI 1.79–21.28 and p=0.03; 95% CI 0.85–15.62 respectively). Latent class modelling resulted in the formation of 3 groups: Class 1, likely fungal infection; Class 2, unlikely fungal infection; Class 3, unclassifiable. A. fumigatus PCR was positive in ∼90% of Class 1 LTRs compared with only 1% in Class 2. Analysis of mortality showed a trend towards significance comparing Class 1 with Class 2 (p=0.06; HR 4.7; 95% CI 0.91–24) (figure 1).
Borsci S, Buckle P, Huddy J, et al., 2017, Usability study of pH strips for nasogastric tube placement, PLoS ONE, Vol: 12, Pages: 1-14, ISSN: 1932-6203
Aims: (1) To model the process of use and usability of pH strips; (2) to identify, through simulation studies, the likelihood of misreading pH strips, and to assess professionals' acceptance, trust and perceived usability of pH strips. Methods: This study was undertaken in four phases and used a mixed-method approach (an audit, a semi-structured interview, a survey and a simulation study). The three-month audit covered 24 patients; the semi-structured interviews were performed with 19 health professionals and informed the process of use of pH strips. A survey of 134 professionals and novices explored the likelihood of misinterpreting pH strips. Standardised questionnaires were used to assess professionals' perceived usability, trust and acceptance of pH strip use in a simulated study. Results: The audit found that in 45.7% of cases aspiration could not be achieved, and that 54% of NG-tube insertions required X-ray confirmation. None of those interviewed had received formal training on pH strip use. In the simulated study, participants made up to 11.15% errors in reading the strips, with important implications for decision making regarding NG tube placement. No difference was identified between professionals and novices in their likelihood of misinterpreting the pH value of the strips. While the overall experience of usage was poor (47.3%), health professionals gave a positive level of trust in both the interview (62.6%) and the survey (68.7%) and of acceptance (interview group 65.1%, survey group 74.7%). They also reported anxiety in the use of the strips (interview group 29.7%, survey group 49.7%). Conclusions: Significant errors occur when using pH strips in a simulated study. Manufacturers should consider developing new pH strips, specifically designed for bedside use, that are more usable and less likely to be misread.
Ni MZ, Huddy JR, Priest OH, et al., 2017, Selecting pH cut-offs for the safe verification of nasogastric feeding tube placement: a decision analytical modelling approach., BMJ Open, Vol: 7, ISSN: 2044-6055
OBJECTIVES: The existing British National Patient Safety Agency (NPSA) safety guideline recommends testing the pH of nasogastric (NG) tube aspirates. Feeding is considered safe if a pH of 5.5 or lower has been observed; otherwise chest X-rays are recommended. Our previous research found that at 5.5, the pH test lacks sensitivity towards oesophageal placements, a major risk identified by feeding experts. The aim of this research is to use a decision analytic modelling approach to systematically assess the safety of the pH test under cut-offs 1-9. MATERIALS AND METHODS: We mapped out the care pathway according to the existing safety guideline, where the pH test is used as a first-line test, followed by chest X-rays. Decision outcomes were scored on a 0-100 scale in terms of safety. Sensitivities and specificities of the pH test at each cut-off were extracted from our previous research. Aggregating outcome scores and probabilities resulted in weighted scores, which enabled an analysis of the relative safety of the checking procedure under various pH cut-offs. RESULTS: The pH test was safest under cut-off 5 when ≥30% of NG tubes were misplaced. Under cut-off 5, respiratory feeding was excluded; oesophageal feeding was kept to a minimum to balance the need for chest X-rays for patients with a pH higher than 5. Routine chest X-rays were less safe than the pH test, while feeding all patients without safety checks was the most risky. DISCUSSION: The safety of the current checking procedure is sensitive to the choice of pH cut-off, the impact of feeding delays, the accuracy of the pH test in the oesophagus, and the extent of tube misplacements. CONCLUSIONS: The pH test with 5 as the cut-off was the safest overall. It is important to understand the local clinical environment so that an appropriate choice of pH cut-off can be made to maximise safety and minimise the use of chest X-rays. TRIAL REGISTRATION NUMBER: ISRCTN11170249; Pre-results.
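The weighted-score aggregation described in the methods is an expected-value calculation over decision-tree outcomes. A minimal Python sketch, where the outcome set, probabilities and safety scores are illustrative placeholders rather than the paper's values:

```python
def expected_safety(prob_by_outcome, score_by_outcome):
    """Weighted safety score for one pH cut-off: sum over decision-tree
    outcomes of P(outcome at this cut-off) x safety score (0-100 scale,
    as in the paper). In practice the probabilities would be derived from
    the test's sensitivity/specificity at that cut-off and the local rate
    of tube misplacement; the numbers below are made up."""
    return sum(p * score_by_outcome[o] for o, p in prob_by_outcome.items())

# illustrative outcomes and values only (not from the study)
probs = {"safe_feed": 0.85, "misplaced_feed": 0.01, "delay_for_xray": 0.14}
scores = {"safe_feed": 100.0, "misplaced_feed": 0.0, "delay_for_xray": 60.0}
print(round(expected_safety(probs, scores), 1))  # 93.4
```

Repeating this calculation for each cut-off 1-9, with cut-off-specific probabilities, yields the relative-safety ranking the abstract describes.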
Markar S, Mackenzie H, Ni Z, et al., 2017, The influence of procedural volume and proficiency gain on mortality from upper GI endoscopic mucosal resection, Gut, Vol: 67, Pages: 79-85, ISSN: 1468-3288
ObjectiveEndoscopic mucosal resection (EMR) is established for the management of benign and early malignant upper gastrointestinal disease. The aim of this observational study was to establish the effect of endoscopist procedural volume on mortality.DesignPatients undergoing upper gastrointestinal EMR between 1997 and 2012 were identified from the Hospital Episode Statistics database. The primary outcome was 30-day mortality and secondary outcomes were 90-day mortality, requirement for emergency intervention and elective cancer reintervention. Risk-adjusted Cumulative Sum (RA-CUSUM) analysis was used to assess patient mortality-risk during initial stage of endoscopist proficiency gain and the effect of endoscopist and hospital volume. Mortality was compared before and after the change point or threshold in RA-CUSUM curve.Results11,051 patients underwent upper gastrointestinal EMR. Endoscopist procedure volume was an independent predictor of 30-day mortality. Fifty-eight percent of EMR procedures were performed by endoscopists with annual volume of 2 cases or less, and had a higher 30- and 90-day mortality rate for cancer patients, 6.1% vs. 0.4%; P<0.001 and 12% vs. 2.1%; P<0.001 respectively. The requirement for emergency intervention after EMR for cancer was also greater with low-volume endoscopists (1.8%vs. 0.1%; P=0.002). In cancer patients, the RA-CUSUM curve change-point for 30-day mortality and elective re-intervention was 4 and 43 cases respectively.ConclusionEMR performed by high volume endoscopists is associated with reduced adverse outcomes. In order to reach proficiency, appropriate training and procedural volume accreditation training programmes are needed nationally.
El-Osta A, Woringer M, Pizzo E, et al., 2017, Does use of point of care testing improve cost effectiveness of the NHS Health Checks programme in the primary care setting? A cost minimisation analysis, BMJ Open, Vol: 7, ISSN: 2044-6055
Objective: To determine whether use of Point of Care Testing (POCT) is less costly to the NHS than laboratory testing in delivering the NHS Health Check (NHSHC) programme in the primary care setting. Design: Observational study and theoretical mathematical model with a micro-costing approach. Setting: We collected data on NHSHC delivered at 9 general practices (7 using POCT; 2 not using POCT). Participants: We recruited 9 general practices offering NHSHC, and a Pathology Services Laboratory in the same area. Methods: We conducted mathematical modelling with permutations in the following fields: provider type (HCA or nurse), type of test performed (total cholesterol with either lab fasting glucose or HbA1c), consumables costs and variable uptake rates, including the rate of non-response to the invite letter and the rate of missed (DNA) appointments. We calculated Total Expected Cost (TEC) per 100 invites, number of NHSHC conducted per 100 invites and costs for completed NHSHC for laboratory- and POCT-based pathways. Univariate and probabilistic sensitivity analyses were conducted to account for uncertainty in the input parameters. Main outcome measures: We collected data on the cost, volume and type of pathology services performed at seven general practices using POCT and a Pathology Services Laboratory. We collected data on response to the NHSHC invitation letter and DNA rates from two general practices. Results: The TEC of using POCT to deliver a routine NHSHC is lower than that of the laboratory-led pathway, with savings of £29 per 100 invited patients up to the point of CVD risk-score presentation. POCT can deliver an NHSHC in one sitting, whereas the laboratory pathway offers patients several opportunities to DNA an appointment. Conclusions: The TEC of using POCT to deliver an NHSHC in the primary care setting is lower than that of the laboratory-led pathway.
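The TEC-per-100-invites comparison reduces to completed checks times unit cost under given response and attendance rates. A sketch with purely illustrative rates and unit costs (not the study's micro-costing data):

```python
# Total Expected Cost (TEC) per 100 invites for two delivery pathways.
# All rates and unit costs are illustrative assumptions only.

def tec_per_100_invites(response_rate, attendance_rate, cost_per_completed_check):
    """Returns (TEC, completed checks) per 100 invitation letters."""
    completed = 100 * response_rate * attendance_rate
    return completed * cost_per_completed_check, completed

# POCT: single sitting, so fewer chances to miss (DNA) an appointment.
poct_tec, poct_n = tec_per_100_invites(0.5, 0.9, 25.0)
# Laboratory pathway: a second visit adds another DNA opportunity.
lab_tec, lab_n = tec_per_100_invites(0.5, 0.8, 30.0)
```

Varying each input in turn (the univariate sensitivity analysis) or sampling all inputs from distributions (the probabilistic analysis) shows how robust the cost difference is to uncertainty.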
Borsci S, Buckle P, Uchegbu I, et al., 2017, Integrating human factors and health economics to inform the design of medical device: a conceptual framework, EMBEC & NBC 2017: Joint Conference of the European Medical and Biological Engineering Conference (EMBEC) and the Nordic-Baltic Conference on Biomedical Engineering and Medical Physics (NBC)
Gopalakrishna G, Langendam M, Scholten R, et al., 2017, Erratum to: Methods for evaluating medical tests and biomarkers, Diagnostic and Prognostic Research, Vol: 1, Pages: 11-11, ISSN: 2397-7523
[This corrects the article DOI: 10.1186/s41512-016-0001-y.].
Roberts HW, Ni MZ, O'Brart DPS, 2017, Financial modelling of femtosecond laser-assisted cataract surgery within the National Health Service using a 'hub and spoke' model for the delivery of high-volume cataract surgery, BMJ Open, Vol: 7, ISSN: 2044-6055
Aims: To develop financial models which offset the additional costs associated with femtosecond laser (FL)-assisted cataract surgery (FLACS) against improvements in productivity, and to determine important factors relating to its implementation into the National Health Service (NHS). Methods: FL platforms are expensive, in both initial purchase and running costs. The additional costs associated with FL technology might be offset by an increase in surgical efficiency. Using a ‘hub and spoke’ model to provide high-volume cataract surgery, we designed a financial model comparing FLACS against conventional phacoemulsification surgery (CPS). The model was populated with averaged financial data from 4 NHS foundation trusts and 4 commercial organisations manufacturing FL platforms. We tested our model with sensitivity and threshold analyses to allow for variations or uncertainties. Results: The averaged weekly workload for cataract surgery using our hub and spoke model required either 8 or 5.4 theatre sessions with CPS or FLACS, respectively. Despite the reduced theatre utilisation, CPS (average £433/case) was still found to be 8.7% cheaper than FLACS (average £502/case). The greatest associated cost of FLACS was the patient interface (PI) (average £135/case). Sensitivity analyses demonstrated that FLACS could be less expensive than CPS, but only if efficiency, in terms of cataract procedures per theatre list, increased by over 100%, or if the cost of the PI was reduced by almost 70%. Conclusions: The financial viability of FLACS within the NHS is currently precluded by the cost of the PI and the lack of knowledge regarding any gains in operational efficiency.
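The threshold analysis on the patient-interface cost can be illustrated with the averaged per-case figures quoted in the abstract (CPS £433, FLACS £502, PI £135); the break-even check below is a naive per-case calculation, not the study's full model:

```python
# Per-case cost comparison using the averaged figures quoted above.
# The discount check is a simplified break-even illustration only.

CPS_PER_CASE = 433.0    # conventional phacoemulsification, average GBP/case
FLACS_PER_CASE = 502.0  # femtosecond laser-assisted, average GBP/case
PI_PER_CASE = 135.0     # patient interface consumable, average GBP/case

def flacs_cost_with_pi_discount(discount):
    """FLACS per-case cost if the PI price were cut by `discount` (0-1)."""
    return FLACS_PER_CASE - PI_PER_CASE * discount

# At list prices FLACS costs more than CPS; in this simplified view a ~70%
# PI discount, as in the threshold analysis above, brings it below CPS.
```

The same function, swept over discount values, is the shape of a threshold analysis: find the smallest discount at which the cheaper option flips.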
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.