To assess the extent of variation in data quality and completeness of electronic health records, and its impact on the robustness of risk predictions of incident cardiovascular disease (CVD) made by a risk prediction tool based on routinely collected data (QRISK3).
Longitudinal cohort study.
392 general practices (including 3.6 million patients) linked to hospital admission data.
Variation in data quality was assessed using Sáez's stability metrics, which quantify the outlyingness of each practice. Statistical frailty models evaluated whether the accuracy of individual QRISK3 predictions and the effects of overall risk factors (the linear predictor) varied between practices.
There was substantial heterogeneity between practices in CVD incidence unaccounted for by QRISK3. In the lowest quintile of statistical frailty, a QRISK3-predicted risk of 10% for a woman corresponded to a range of 7.1% to 9.0% when practice variability was incorporated into the statistical frailty models; in the highest quintile, the range was 10.9% to 16.4%. Data quality (assessed with the Sáez metrics) and completeness were comparable across different levels of statistical frailty. For example, information on ethnicity was missing for 55.7%, 62.7%, 57.8%, 64.8% and 62.1% of patients in practices from the lowest to the highest quintile of statistical frailty, respectively. The effects of risk factors did not vary between practices, with little statistical variation in beta coefficients.
The considerable unmeasured heterogeneity in CVD incidence between practices was not explained by variations in data quality or effects of risk factors. QRISK3 risk prediction should be supplemented with clinical judgement and evidence of additional risk factors.
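The quintile ranges above are consistent with standard frailty-model arithmetic, in which a practice-level frailty term multiplies the cumulative hazard, so that survival S becomes S**z and a predicted risk p becomes 1 - (1 - p)**z. A minimal sketch (the frailty values below are hypothetical, chosen only for illustration, not the study's estimates):

```python
def frailty_adjusted_risk(p, z):
    """Rescale a baseline predicted risk p by a practice-level frailty z.

    Under a proportional-hazards frailty model, z multiplies the cumulative
    hazard, so survival (1 - p) is raised to the power z.
    """
    return 1.0 - (1.0 - p) ** z

baseline = 0.10  # a QRISK3-predicted 10-year risk of 10%
for z in (0.7, 1.0, 1.7):  # hypothetical low-, average- and high-incidence practices
    print(f"z = {z}: adjusted risk = {frailty_adjusted_risk(baseline, z):.1%}")
```

A frailty below 1 shrinks the effective risk and a frailty above 1 inflates it, which is how a single 10% prediction can correspond to materially different observed risks across practices.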
There are limited data on the drivers of changes in mortality over time. We aimed to examine temporal changes in mortality and to understand their determinants.
743,149 PCI procedures were included, from patients in the British Cardiovascular Intervention Society (BCIS) database aged between 18 and 100 years who underwent percutaneous coronary intervention (PCI) for acute coronary syndrome (ACS) in England and Wales between 2006 and 2021. We decomposed the contributing factors to the difference in observed mortality proportions between 2006 and 2021 using the Fairlie decomposition method. Multiple imputation was used to address missing data.
Overall, the mortality proportion increased over time, from 1.7% (95% CI: 1.5% to 1.9%) in 2006 to 3.1% (95% CI: 3.0% to 3.2%) in 2021. 61.2% of this difference was explained by the variables included in the model. ACS subtype (percentage contribution: 14.67%; 95% CI: 5.76% to 23.59%) and medical history (percentage contribution: 13.50%; 95% CI: 4.33% to 22.67%) were the strongest contributors to the difference in observed mortality proportions between 2006 and 2021. The drivers of mortality change also differed between time periods: ACS subtype and severity of presentation were among the strongest contributors between 2006 and 2012, while access site and demographics were the strongest contributors between 2012 and 2021.
Patient factors and the move towards ST-elevation myocardial infarction (STEMI) PCI were the main drivers of short-term mortality changes following PCI for ACS.
•In-hospital mortality has increased over time, from 1.7% in 2006 to 3.1% in 2021.
•The move towards STEMI PCI and medical history have driven the mortality change.
•There are different drivers of the mortality change at different time periods.
•Extra attention should be given to patients with STEMI and co-morbidities in the short term after PCI.
•This is the first study to analyse the drivers of in-hospital mortality after PCI for ACS.
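The decomposition idea can be illustrated with a simplified counterfactual swap: fit (or assume) a pooled risk model, replace one covariate's 2021 distribution with its 2006 distribution, and measure how much of the mortality gap disappears. The full Fairlie method additionally matches observations at random and averages over covariate orderings; all coefficients and covariate distributions below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Hypothetical pooled-logit coefficients: intercept, STEMI indicator, comorbidity count.
beta = np.array([-4.0, 0.9, 0.3])

def design(n, p_stemi, comorb_rate):
    """Simulate a cohort's design matrix (intercept, ACS subtype, history)."""
    return np.column_stack([
        np.ones(n),                        # intercept
        rng.binomial(1, p_stemi, n),       # ACS subtype (STEMI indicator)
        rng.poisson(comorb_rate, n),       # burden of medical history
    ])

X_2006 = design(5000, p_stemi=0.25, comorb_rate=1.0)
X_2021 = design(5000, p_stemi=0.45, comorb_rate=1.6)

# Gap in mean predicted mortality between the two cohorts.
gap = sigmoid(X_2021 @ beta).mean() - sigmoid(X_2006 @ beta).mean()

# Counterfactual: the 2021 cohort, but with the 2006 ACS-subtype mix.
X_cf = X_2021.copy()
X_cf[:, 1] = X_2006[:, 1]
stemi_contrib = sigmoid(X_2021 @ beta).mean() - sigmoid(X_cf @ beta).mean()
print(f"gap = {gap:.4f}, attributed to ACS subtype = {stemi_contrib:.4f}")
```

The covariate-by-covariate contributions obtained this way sum approximately (up to the model's nonlinearity) to the explained part of the gap, which is the quantity the percentage contributions above describe.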
Background The performance of emerging transcatheter aortic valve implantation (TAVI) clinical prediction models (CPMs) in national TAVI cohorts distinct from those in which they were derived is unknown. This study aimed to investigate the performance of the German Aortic Valve, FRANCE-2, OBSERVANT and American College of Cardiology (ACC) TAVI CPMs, compared with that of historic cardiac CPMs such as the EuroSCORE and STS-PROM, in a large national TAVI registry.
Methods The calibration and discrimination of each CPM were analysed in 6676 patients from the UK TAVI registry, in the cohort as a whole and across several subgroups. Strata included gender, diabetes status, access route and valve type. Furthermore, agreement in risk classification between the considered CPMs was analysed at the individual patient level.
Results The observed 30-day mortality rate was 5.4%. In the whole cohort, the majority of CPMs over-estimated the risk of 30-day mortality, although the mean ACC score (5.2%) approximately matched the observed mortality rate. Areas under the ROC curve ranged from 0.57 for OBSERVANT to 0.64 for ACC. Risk classification agreement was low across all models, with Fleiss' kappa values between 0.17 and 0.50.
Conclusions Although the FRANCE-2 and ACC models outperformed all other CPMs, the performance of current TAVI CPMs was low when applied to an independent cohort of TAVI patients. Hence, TAVI-specific CPMs need to be derived outside the populations previously used for model derivation, either by adapting existing CPMs or by developing new risk scores in large national registries.
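Fleiss' kappa, used above to quantify agreement in risk classification, can be computed directly from a subjects-by-categories count table. A self-contained sketch on toy data (not UK TAVI registry data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an (n_subjects x n_categories) table giving how
    many raters (here, risk models) placed each subject in each category.
    Assumes the same number of raters for every subject."""
    counts = np.asarray(counts, dtype=float)
    n, _ = counts.shape
    m = counts.sum(axis=1)[0]                 # raters per subject
    p_cat = counts.sum(axis=0) / (n * m)      # overall category proportions
    # Per-subject agreement: proportion of agreeing rater pairs.
    P_i = ((counts ** 2).sum(axis=1) - m) / (m * (m - 1))
    P_bar, Pe_bar = P_i.mean(), (p_cat ** 2).sum()
    return (P_bar - Pe_bar) / (1.0 - Pe_bar)  # chance-corrected agreement

# Toy data: six patients assigned to low/medium/high risk by four CPMs.
table = [[4, 0, 0], [0, 4, 0], [2, 2, 0],
         [0, 3, 1], [1, 1, 2], [0, 0, 4]]
print(round(fleiss_kappa(table), 3))
```

Values near 0.5, as in the toy table here, sit at the top of the range reported for the TAVI CPMs, underlining how often the models disagreed about which risk band a given patient belongs to.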
Distortion product otoacoustic emissions (DPOAEs) evoked by two pure tones carry information about the mechanisms that generate and shape them. Thus, DPOAEs hold promise for providing powerful noninvasive diagnostic details of cochlear operations, middle ear (ME) transmission, and impairments. DPOAEs are sensitive to ME function because they are influenced by ME transmission twice: by the inward-going primary tones in the forward direction and by the outward-travelling DPOAEs in the reverse direction. However, the effects of ME injuries on DPOAEs have not been systematically characterized. The current study focused on exploring the utility of DPOAEs for examining ME function by methodically characterizing DPOAEs and ME transmission under pathological ME conditions, specifically under conditions of tympanic-membrane (TM) perforation and spontaneous healing.
Results indicated that DPOAEs were measurable with TM perforations up to ∼50%, and DPOAE reductions increased with increasing size of the TM perforation. DPOAE reductions were approximately flat across test frequencies when the TM perforation was about 10% (<1/8 of the pars tensa) or less. However, with perforations greater than 10%, DPOAEs decreased further with a low-pass filter shape, with ∼30 dB loss at frequencies below 10 kHz and a steep downward-sloping pattern at higher frequencies. The reduction pattern of DPOAEs across frequencies was similar to, but much greater than, the directly measured ME pressure gain in the forward direction, which suggested that the reduction in DPOAEs was a summation of losses in ME transmission in both the forward and reverse directions. Following 50% TM perforations, DPOAEs recovered over a 4-week spontaneous-healing interval, and these recoveries were confirmed by improvements in auditory brainstem response (ABR) thresholds. However, up to 4 weeks post-perforation, DPOAEs never fully recovered to the levels obtained with a normal intact TM, consistent with the incomplete recovery of ABR thresholds and ME transmission, especially in high-frequency regions, which could be explained by an irregularly dense and thickened healed TM.
Since TM perforations in patients are commonly caused by either trauma or infection, the present results contribute towards providing insight into understanding ME transmission under pathological conditions as well as promoting the application of DPOAEs in the evaluation and diagnosis of deficits in the ME-transmission system.
The aim of this study was to evaluate national temporal trends in same-day discharge (SDD) and to compare clinical outcomes with those among patients admitted for overnight stay after elective percutaneous coronary intervention (PCI) for stable angina.
Overnight observation has been the standard of care following PCI, with no previous national analyses around changes in practice or clinical outcomes from health care systems in which SDD is the predominant practice for elective PCI.
Data from 169,623 patients undergoing elective PCI between 2007 and 2014 were obtained from the British Cardiovascular Intervention Society registry. Multiple logistic regressions and the British Cardiovascular Intervention Society risk model were used to study the association between SDD and 30-day mortality.
The rate of SDD increased from 23.5% in 2007 to 57.2% in 2014, with median center SDD prevalence varying from 17% (interquartile range: 6% to 39%) in 2007 to 66% (interquartile range: 45% to 77%) in 2014. The largest independent association with SDD was observed for radial access (odds ratio: 1.69; 95% confidence interval: 1.65 to 1.74; p < 0.001). An increase in the 30-day mortality rate over time was observed for SDD cases, without exceeding the predicted mortality risk. In the difference-in-differences analysis, observed temporal changes in 30-day mortality did not differ between SDD and overnight stay (odds ratio: 1.15; 95% confidence interval: 0.294 to 4.475; p = 0.884).
SDD has become the predominant model of care among elective PCI cases in the United Kingdom, in increasingly complex patients. SDD appears to be safe, with 30-day mortality rates in line with those calculated using the national risk prediction score used for public reporting. Changes toward SDD practice have important economic implications for health care systems worldwide.
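The difference-in-differences comparison on the odds scale equals the interaction odds ratio that a logistic model with group, period and group-by-period terms would estimate. A toy illustration with invented counts (not BCIS registry figures):

```python
# Invented 30-day mortality counts for two discharge groups in two years.
deaths = {("SDD", 2007): 20, ("SDD", 2014): 45,
          ("overnight", 2007): 90, ("overnight", 2014): 130}
totals = {("SDD", 2007): 10_000, ("SDD", 2014): 20_000,
          ("overnight", 2007): 30_000, ("overnight", 2014): 15_000}

def odds(group, year):
    """Mortality odds for one group-year cell."""
    p = deaths[(group, year)] / totals[(group, year)]
    return p / (1.0 - p)

# Ratio of the two within-group temporal odds ratios: the DiD interaction OR.
did_or = (odds("SDD", 2014) / odds("SDD", 2007)) / \
         (odds("overnight", 2014) / odds("overnight", 2007))
print(f"difference-in-differences odds ratio: {did_or:.2f}")
```

An interaction odds ratio near 1 (as reported above, OR 1.15 with a wide confidence interval) would indicate that mortality in SDD patients changed over time no differently from mortality in overnight-stay patients.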
Wearable and mobile technology provides new opportunities to manage health conditions remotely and unobtrusively. For example, healthcare providers can repeatedly sample a person's condition to monitor the progression of symptoms and intervene if necessary. There is usually a utility-tolerability trade-off between collecting information at sufficient frequencies and quantities to be useful, and over-burdening the user or the underlying technology, particularly when active input is required from the user. Selecting the next sampling time adaptively using previous responses, so that people are only sampled at high frequency when necessary, can help to manage this trade-off. We present a novel approach to adaptive sampling using clustered continuous-time hidden Markov models. The model predicts, at any given sampling time, the probability of moving to an 'alert' state, and the next sample time is scheduled when this probability has exceeded a given threshold. The clusters, each representing a distinct sub-model, allow heterogeneity in states and state transitions. The work is illustrated using longitudinal mental-health symptom data from 49 people, collected using ClinTouch, a mobile app designed to monitor people with a diagnosis of schizophrenia. Using these data, we show how the adaptive sampling scheme behaves under different model parameters and risk thresholds, and how the average sampling rate can be substantially reduced whilst maintaining a high sampling frequency during high-risk periods.
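The scheduling rule can be sketched with an assumed three-state generator matrix (not the ClinTouch model): the probability of occupying the alert state after time t is read off the matrix exponential of Q*t, and the next sample is scheduled at the first time this probability crosses the threshold.

```python
import numpy as np

# Assumed generator for a 3-state continuous-time Markov chain; state 2 is
# the 'alert' state, treated here as absorbing for simplicity.
Q = np.array([[-0.20,  0.15, 0.05],
              [ 0.10, -0.30, 0.20],
              [ 0.00,  0.00, 0.00]])

def expm(A, terms=60):
    """Matrix exponential via a truncated Taylor series; adequate for the
    small, well-scaled generators used here."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def p_alert(state, t):
    """Probability of occupying the alert state t days after observing `state`."""
    return expm(Q * t)[state, 2]

def next_sample_time(state, threshold=0.2, dt=0.25, t_max=30.0):
    """Earliest grid time at which the alert probability exceeds `threshold`."""
    t = dt
    while t <= t_max and p_alert(state, t) < threshold:
        t += dt
    return t

# A user last observed in the riskier state 1 is re-sampled sooner than one in state 0.
print(next_sample_time(1), next_sample_time(0))
```

Because the alert probability accumulates faster from riskier states, the rule automatically samples at high frequency during high-risk periods and relaxes otherwise, which is the trade-off described above.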
Objective: To review and appraise the validity and usefulness of published and preprint reports of prediction models for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or of being admitted to hospital or dying with the disease.
Design: Living systematic review and critical appraisal by the covid-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group.
Data sources: PubMed and Embase through Ovid, up to 17 February 2021, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020.
Study selection: Studies that developed or validated a multivariable covid-19 related prediction model.
Data extraction: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).
Results: 126 978 titles were screened, and 412 studies describing 731 new prediction models or validations were included. Of these 731, 125 were diagnostic models (including 75 based on medical imaging) and the remaining 606 were prognostic models for either identifying those at risk of covid-19 in the general population (13 models) or predicting diverse outcomes in those individuals with confirmed covid-19 (593 models). Owing to the widespread availability of diagnostic testing capacity after the summer of 2020, this living review has now focused on the prognostic models. Of these, 29 had low risk of bias, 32 had unclear risk of bias, and 545 had high risk of bias. The most common causes of high risk of bias were inadequate sample sizes (n=408, 67%) and inappropriate or incomplete evaluation of model performance (n=338, 56%). 381 models were newly developed, and 225 were external validations of existing models. The reported C indexes varied between 0.77 and 0.93 in development studies with low risk of bias, and between 0.56 and 0.78 in external validations with low risk of bias. The Qcovid models, the PRIEST score, Carr's model, the ISARIC4C Deterioration model, and the Xie model showed adequate predictive performance in studies at low risk of bias. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Conclusion: Prediction models for covid-19 entered the academic literature to support medical decision making at unprecedented speed and in large numbers. Most published prediction model studies were poorly reported and at high risk of bias such that their reported predictive performances are probably optimistic. Models with low risk of bias should be validated before clinical implementation, preferably through collaborative efforts to also allow an investigation of the heterogeneity in their performance across various populations and settings. Methodological guidance, as provided in this paper, should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction modellers should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.
Systematic review registration: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
Readers' note: This article is the final version of a living systematic review that has been updated over the past two years to reflect emerging evidence. This version is update 4 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
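For context, the C index reported in the results above can, for a binary outcome, be computed by counting pair orderings; in that setting it equals the area under the ROC curve. A small self-contained sketch on toy data:

```python
from itertools import combinations

def c_index(risk, event):
    """C index for binary outcomes: the fraction of comparable pairs (one
    event, one non-event) in which the higher predicted risk belongs to the
    person who had the event; ties count one half."""
    conc = ties = n = 0
    for i, j in combinations(range(len(risk)), 2):
        if event[i] == event[j]:
            continue  # pairs with the same outcome are not comparable
        n += 1
        hi, lo = (i, j) if event[i] else (j, i)  # hi: the person with the event
        if risk[hi] > risk[lo]:
            conc += 1
        elif risk[hi] == risk[lo]:
            ties += 1
    return (conc + 0.5 * ties) / n

# Toy example: four patients, two events, one discordant comparable pair.
print(c_index([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))
```

A value of 0.5 means the model ranks patients no better than chance, which is why external-validation C indexes as low as 0.56 indicate very limited discrimination.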
It is generally accepted that emissions of nitrogen oxides (NOx) increase as the volume fraction of biodiesel increases in blends with conventional diesel fuel. While many mechanisms based on biodiesel effects on in-cylinder processes have been proposed to explain this observation, a clear understanding of the relative importance of each has remained elusive.
To gain further insight into the cause(s) of the biodiesel NOx increase, experiments were conducted in a single-cylinder version of a heavy-duty diesel engine with extensive optical access to the combustion chamber. The engine was operated using two biodiesel fuels and two hydrocarbon reference fuels, over a wide range of loads, and using undiluted air as well as air diluted with simulated exhaust gas recirculation. Measurements were made of cylinder pressure, spatially integrated natural luminosity (a measure of radiative heat transfer), engine-out emissions of NOx and smoke, flame lift-off length, actual start of injection, ignition delay, and efficiency. Adiabatic flame temperatures for the test fuels and a surrogate #2 diesel fuel were also computed at representative diesel-engine conditions.
Results suggest that the biodiesel NOx increase is not quantitatively determined by a change in a single fuel property, but rather is the result of a number of coupled mechanisms whose effects may tend to reinforce or cancel one another under different conditions, depending on specific combustion and fuel characteristics. Nevertheless, charge-gas mixtures that are closer to stoichiometric at ignition and in the standing premixed autoignition zone near the flame lift-off length appear to be key factors in helping to explain the biodiesel NOx increase under all conditions. These differences are expected to lead to higher local and average in-cylinder temperatures, lower radiative heat losses, and a shorter, more-advanced combustion event, all of which would be expected to increase thermal NOx emissions. Differences in prompt NO formation and species concentrations resulting from fuel and jet-structure changes may also play important roles.
In view of the growth of published articles, there is an increasing need for studies that summarize scientific research. An increasingly common review is a “methodology scoping review,” which provides a summary of existing analytical methods, techniques and software that have been proposed or applied in research articles to address an analytical problem or further an analytical approach. However, guidelines for their design, implementation, and reporting are limited.
Drawing on the experiences of the authors, which were consolidated through a series of face-to-face workshops, we summarize the challenges inherent in conducting a methodology scoping review and offer suggestions of best practice to promote future guideline development.
We identified three challenges of conducting a methodology scoping review. First, identification of search terms; one cannot usually define the search terms a priori, and the language used for a particular method can vary across the literature. Second, the scope of the review requires careful consideration because new methodology is often not described (in full) within abstracts. Third, many new methods are motivated by a specific clinical question, where the methodology may only be documented in supplementary materials. We formulated several recommendations that build upon existing review guidelines. These recommendations ranged from an iterative approach to defining search terms through to screening and data extraction processes.
Although methodology scoping reviews are an important aspect of research, there is currently a lack of guidelines to standardize their design, implementation, and reporting. We recommend a wider discussion on this topic.
•Reviews that summarize existing analytical methods are a key aspect of research.
•Guidelines for the conduct of such “methodology scoping reviews” are limited.
•We present several recommendations for conducting methodology scoping reviews.