Abstract Background The performance of the GRACE, HEART and TIMI scores was compared in predicting the probability of major adverse cardiac events (MACE) in chest pain patients presenting at the emergency department (ED), in particular their ability to identify patients at low risk. Methods Chest pain patients presenting at the ED in nine Dutch hospitals were included. The primary outcome was MACE within 6 weeks. The HEART score was determined by the treating physician at the ED. The GRACE and TIMI scores were calculated based on prospectively collected data. Performance of the scores was compared by calculating the area under the receiver operating characteristic curve (AUC). Additionally, the number of low-risk patients identified by each score was compared at a fixed level of safety of at least 95% or 98% sensitivity. Results In total, 1748 patients were included. The AUCs of GRACE, HEART, and TIMI were 0.73 (95% CI: 0.70–0.76), 0.86 (95% CI: 0.84–0.88) and 0.80 (95% CI: 0.78–0.83), respectively (all differences in AUC were highly statistically significant). At an absolute level of safety of at least 98% sensitivity, the GRACE score identified 231 patients as “low risk”, with 2.2% missed MACE; the HEART score identified 381 patients as “low risk”, with 0.8% missed MACE. The TIMI score identified no “low risk” patients at this safety level. Conclusions The HEART score outperformed the GRACE and TIMI scores in discriminating between chest pain patients with and without MACE, and identified the largest group of low-risk patients at the same level of safety.
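The evaluation approach in this abstract, rank-based discrimination plus a "low risk" cutoff chosen at a fixed sensitivity floor, can be sketched in plain Python. The data below are hypothetical, not study data, and the function names are illustrative:

```python
def auc(scores, labels):
    """AUC as P(score_pos > score_neg), counting ties as half (Mann-Whitney)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def low_risk_cutoff(scores, labels, min_sens=0.98):
    """Highest cutoff c such that labelling score <= c as 'low risk'
    (score > c as positive) still catches at least min_sens of all events."""
    events = sum(labels)
    best = None
    for c in sorted(set(scores)):
        caught = sum(1 for s, y in zip(scores, labels) if y == 1 and s > c)
        if caught / events >= min_sens:
            best = c
    return best
```

With real data, the cutoff returned this way marks the highest score that can still be labelled "low risk" while holding sensitivity at or above the chosen safety level.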
Cereblon (CRBN), a substrate receptor for the cullin-RING ubiquitin ligase 4 (CRL4) complex, is a direct protein target for thalidomide teratogenicity and antitumor activity of immunomodulatory drugs (IMiDs). Here we report that glutamine synthetase (GS) is an endogenous substrate of CRL4CRBN. Upon exposing cells to high glutamine concentration, GS is acetylated at lysines 11 and 14, yielding a degron that is necessary and sufficient for binding and ubiquitylation by CRL4CRBN and degradation by the proteasome. Binding of acetylated degron peptides to CRBN depends on an intact thalidomide-binding pocket but is not competitive with IMiDs. These findings reveal a feedback loop involving CRL4CRBN that adjusts GS protein levels in response to glutamine and uncover a new function for lysine acetylation.
•GS is an endogenous substrate of CRL4CRBN
•CRL4CRBN directly mediates the glutamine-induced degradation of GS
•Glutamine-stimulated acetylation of lysines 11 and 14 regulates GS degradation
•The thalidomide-binding domain of CRBN binds to an acetyllysine degron of GS
Nguyen et al. demonstrate that glutamine induces acetylation of GS at lysines 11 and 14 to create an acetylated degron that binds CRL4CRBN, resulting in ubiquitylation and degradation of GS.
Please cite this paper as: van Leeuwen M, Louwerse M, Opmeer B, Limpens J, Serlie M, Reitsma J, Mol B. Glucose challenge test for detecting gestational diabetes mellitus: a systematic review. BJOG 2012;119:393–401.
Background The best strategy to identify women with gestational diabetes mellitus (GDM) is unclear.
Objectives To perform a systematic review to calculate summary estimates of the sensitivity and specificity of the 50‐g glucose challenge test for GDM.
Search strategy Systematic search of MEDLINE, EMBASE and Web of Science.
Selection criteria Articles that compared the 50‐g glucose challenge test with the oral glucose tolerance test (OGTT, with a 75‐ or 100‐g reference standard) before 32 weeks of gestation.
Data collection and analysis Two reviewers independently selected articles meeting the criteria above. Summary estimates of sensitivity and specificity, with 95% confidence intervals and summary receiver operating characteristic curves, were calculated using bivariate random‐effects models.
Main results Twenty‐six studies were included (13 564 women). Studies that included women with risk factors showed a pooled sensitivity of the 50‐g glucose challenge test of 0.74 (95% CI 0.62–0.87), a pooled specificity of 0.77 (95% CI 0.66–0.89) (threshold value of 7.8 mmol/l), a derived positive likelihood ratio (LR) of 3.2 (95% CI 2.0–5.2) and a negative LR of 0.34 (95% CI 0.22–0.53). In studies with consecutive recruitment, the pooled sensitivity was 0.74 (95% CI 0.62–0.87) for a specificity of 0.85 (95% CI 0.80–0.91), with a derived positive LR of 4.9 (95% CI 3.5–7.0) and negative LR of 0.31 (95% CI 0.20–0.47). Increasing the threshold for disease (OGTT result) increased the sensitivity of the challenge test, and decreased the specificity.
Authors’ conclusions The 50‐g glucose challenge test is acceptable to screen for GDM, but cannot replace the OGTT. Further possibilities of combining the 50‐g glucose challenge test with other screening strategies should be explored.
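The positive and negative likelihood ratios reported in this review are derived directly from the pooled sensitivity and specificity; a minimal check in Python (the formulas are standard, the function name is ours):

```python
def likelihood_ratios(sens, spec):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sens / (1 - spec), (1 - sens) / spec

# Risk-factor subgroup from the review: sensitivity 0.74, specificity 0.77
# gives LR+ ~ 3.2 and LR- ~ 0.34, matching the reported derived values.
lr_pos, lr_neg = likelihood_ratios(0.74, 0.77)
```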
The Framingham risk models and pooled cohort equations (PCE) are widely used and advocated in guidelines for predicting 10-year risk of developing coronary heart disease (CHD) and cardiovascular disease (CVD) in the general population. Over the past few decades, these models have been extensively validated within different populations, which provided mounting evidence that local tailoring is often necessary to obtain accurate predictions. The objective is to systematically review and summarize the predictive performance of three widely advocated cardiovascular risk prediction models (Framingham Wilson 1998, Framingham ATP III 2002 and PCE 2013) in men and women separately, to assess the generalizability of performance across different subgroups and geographical regions, and to determine sources of heterogeneity in the findings across studies.
A search was performed in October 2017 to identify studies investigating the predictive performance of the aforementioned models. Studies were included if they externally validated one or more of the original models in the general population for the same outcome as the original model. We assessed risk of bias for each validation and extracted data on population characteristics and model performance. Performance estimates (observed versus expected (OE) ratio and c-statistic) were summarized using random-effects models, and sources of heterogeneity were explored with meta-regression.
The search identified 1585 studies, of which 38 were included, describing a total of 112 external validations. Results indicate that, on average, all models overestimate the 10-year risk of CHD and CVD (pooled OE ratio ranged from 0.58 (95% CI 0.43-0.73; Wilson men) to 0.79 (95% CI 0.60-0.97; ATP III women)). Overestimation was most pronounced for high-risk individuals and European populations. Further, discriminative performance was better in women for all models. There was considerable heterogeneity in the c-statistic between studies, likely due to differences in population characteristics.
The Framingham Wilson, ATP III and PCE discriminate comparably well but all overestimate the risk of developing CVD, especially in higher risk populations. Because the extent of miscalibration substantially varied across settings, we highly recommend that researchers further explore reasons for overprediction and that the models be updated for specific populations.
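The pooled OE ratios above come from random-effects meta-analysis. A generic DerSimonian-Laird pooling of study estimates (e.g. log OE ratios) looks roughly like this sketch; it is not the authors' actual code, and assumes each study supplies an estimate and a standard error:

```python
import math

def pool_random_effects(estimates, ses):
    """DerSimonian-Laird random-effects pooling of study-level estimates
    (e.g. log OE ratios). Returns the pooled estimate and its standard error."""
    w = [1 / se**2 for se in ses]                       # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]           # re-weight with tau^2
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re))
```

Pooling on the log scale and exponentiating the result is the usual choice for ratio measures such as OE, so that the confidence interval is symmetric on the log scale.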
Aim: To test the extent to which the vertical structure of tropical forests is determined by environment, forest structure or biogeographical history. Location: Pan-tropical. Methods: Using height and diameter data from 20,497 trees in 112 non-contiguous plots, asymptotic maximum height (H_AM) and height-diameter relationships were computed with nonlinear mixed effects (NLME) models to: (1) test for environmental and structural causes of differences among plots, and (2) test if there were continental differences once environment and structure were accounted for; persistence of differences may imply the importance of biogeography for vertical forest structure. NLME analyses for floristic subsets of data (only/excluding Fabaceae and only/excluding Dipterocarpaceae individuals) were used to examine whether family-level patterns revealed biogeographical explanations of cross-continental differences. Results: H_AM and allometry were significantly different amongst continents. H_AM was greatest in Asian forests (58.3 ± 7.5 m, 95% CI), followed by forests in Africa (45.1 ± 2.6 m), America (35.8 ± 6.0 m) and Australia (35.0 ± 7.4 m), and height-diameter relationships varied similarly; for a given diameter, stems were tallest in Asia, followed by Africa, America and Australia. Precipitation seasonality, basal area, stem density, solar radiation and wood density each explained some variation in allometry and H_AM, yet continental differences persisted even after these were accounted for. Analyses using floristic subsets showed that significant continental differences in H_AM and allometry persisted in all cases. Main conclusions: Tree allometry and maximum height are altered by environmental conditions, forest structure and wood density. Yet, even after accounting for these, tropical forest architecture varies significantly from continent to continent.
The greater stature of tropical forests in Asia is not directly determined by the dominance of the family Dipterocarpaceae, as on average non-dipterocarps are equally tall. We hypothesise that dominant large-statured families create conditions in which only tall species can compete, thus perpetuating a forest dominated by tall individuals from diverse families.
While the opportunities of ML and AI in healthcare are promising, the growth of complex data-driven prediction models requires careful quality and applicability assessment before they are applied and disseminated in daily practice. This scoping review aimed to identify actionable guidance for those closely involved in AI-based prediction model (AIPM) development, evaluation and implementation including software engineers, data scientists, and healthcare professionals and to identify potential gaps in this guidance. We performed a scoping review of the relevant literature providing guidance or quality criteria regarding the development, evaluation, and implementation of AIPMs using a comprehensive multi-stage screening strategy. PubMed, Web of Science, and the ACM Digital Library were searched, and AI experts were consulted. Topics were extracted from the identified literature and summarized across the six phases at the core of this review: (1) data preparation, (2) AIPM development, (3) AIPM validation, (4) software development, (5) AIPM impact assessment, and (6) AIPM implementation into daily healthcare practice. From 2683 unique hits, 72 relevant guidance documents were identified. Substantial guidance was found for data preparation, AIPM development and AIPM validation (phases 1-3), while later phases clearly have received less attention (software development, impact assessment and implementation) in the scientific literature. The six phases of the AIPM development, evaluation and implementation cycle provide a framework for responsible introduction of AI-based prediction models in healthcare. Additional domain and technology specific research may be necessary and more practical experience with implementing AIPMs is needed to support further guidance.
Abstract Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital or dying with the disease. Design Living systematic review and critical appraisal by the covid-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 17 February 2021, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 126 978 titles were screened, and 412 studies describing 731 new prediction models or validations were included. Of these 731, 125 were diagnostic models (including 75 based on medical imaging) and the remaining 606 were prognostic models for either identifying those at risk of covid-19 in the general population (13 models) or predicting diverse outcomes in those individuals with confirmed covid-19 (593 models). Owing to the widespread availability of diagnostic testing capacity after the summer of 2020, this living review has now focused on the prognostic models. Of these, 29 had low risk of bias, 32 had unclear risk of bias, and 545 had high risk of bias. The most common causes for high risk of bias were inadequate sample sizes (n=408, 67%) and inappropriate or incomplete evaluation of model performance (n=338, 56%). 381 models were newly developed, and 225 were external validations of existing models.
The reported C indexes varied between 0.77 and 0.93 in development studies with low risk of bias, and between 0.56 and 0.78 in external validations with low risk of bias. The Qcovid models, the PRIEST score, Carr’s model, the ISARIC4C Deterioration model, and the Xie model showed adequate predictive performance in studies at low risk of bias. Details on all reviewed models are publicly available at https://www.covprecise.org/. Conclusion Prediction models for covid-19 entered the academic literature to support medical decision making at unprecedented speed and in large numbers. Most published prediction model studies were poorly reported and at high risk of bias such that their reported predictive performances are probably optimistic. Models with low risk of bias should be validated before clinical implementation, preferably through collaborative efforts to also allow an investigation of the heterogeneity in their performance across various populations and settings. Methodological guidance, as provided in this paper, should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction modellers should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. Readers’ note This article is the final version of a living systematic review that has been updated over the past two years to reflect emerging evidence. This version is update 4 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
As complete reporting is essential to judge the validity and applicability of multivariable prediction models, a guideline for the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) was introduced. We assessed the completeness of reporting of prediction model studies published just before the introduction of the TRIPOD statement, to refine and tailor its implementation strategy.
Within each of 37 clinical domains, 10 journals with the highest journal impact factor were selected. A PubMed search was performed to identify prediction model studies published before the launch of TRIPOD in these journals (May 2014). Eligible publications reported on the development or external validation of a multivariable prediction model (either diagnostic or prognostic) or on the incremental value of adding a predictor to an existing model.
We included 146 publications (84% prognostic), from which we assessed 170 models: 73 (43%) on model development, 43 (25%) on external validation, 33 (19%) on incremental value, and 21 (12%) on combined development and external validation of the same model. Overall, publications adhered to a median of 44% (25th-75th percentile 35-52%) of TRIPOD items, with 44% (35-53%) for prognostic and 41% (34-48%) for diagnostic models. TRIPOD items that were completely reported for less than 25% of the models concerned abstract (2%), title (5%), blinding of predictor assessment (6%), comparison of development and validation data (11%), model updating (14%), model performance (14%), model specification (17%), characteristics of participants (21%), model performance measures (methods) (21%), and model-building procedures (24%). Most often reported were TRIPOD items regarding overall interpretation (96%), source of data (95%), and risk groups (90%).
More than half of the items considered essential for transparent reporting were not fully addressed in publications of multivariable prediction model studies. Essential information for using a model in individual risk prediction, i.e. model specifications and model performance, was incomplete for more than 80% of the models. Items that require improved reporting are title, abstract, and model-building procedures, as they are crucial for identification and external validation of prediction models.
Atherothrombosis is a leading cause of cardiovascular mortality and long-term morbidity. Platelets and coagulation proteases, interacting with circulating cells and in different vascular beds, modify several complex pathologies including atherosclerosis. In the second Maastricht Consensus Conference on Thrombosis, this theme was addressed by diverse scientists from bench to bedside. All presentations were discussed with audience members and the results of these discussions were incorporated in the final document that presents a state-of-the-art reflection of expert opinions and consensus recommendations regarding the following five topics:
1. In atherothrombosis research, more focus on the contribution of specific risk factors like ectopic fat needs to be considered; definitions of atherothrombosis distinguishing different phases of disease, including plaque (in)stability, are important; proteomic and metabolomic data are to be added to genetic information.
2. Mechanisms of leukocyte and macrophage plasticity, migration, and transformation in murine atherosclerosis need to be considered; disease mechanism-based biomarkers need to be identified; experimental systems are needed that incorporate whole-blood flow to understand how red blood cells influence thrombus formation and stability; knowledge on platelet heterogeneity and priming conditions needs to be translated toward the in vivo situation.
3. The role of factor (F) XI in thrombosis, including the lower margins of this factor related to safe and effective antithrombotic therapy, needs to be established; FXI is a key regulator linking platelets, thrombin generation, and inflammatory mechanisms in a renin-angiotensin dependent manner, but the impact on thrombin-dependent PAR signaling needs further study; the fundamental mechanisms in FXIII biology and biochemistry and their impact on thrombus biophysical characteristics need to be explored; the interactions of red cells and fibrin formation and their consequences for thrombus formation and lysis need to be addressed. Platelet-fibrin interactions are pivotal determinants of clot formation and stability with potential therapeutic consequences.
4. The role of protease-activated receptor (PAR)-4 vis-à-vis PAR-1 as a target for antithrombotic therapy merits study; ongoing trials on platelet function test-based antiplatelet therapy adjustment support the development of practically feasible tests; risk scores for patients with atrial fibrillation need refinement, taking new biomarkers including coagulation into account; risk scores that consider organ system differences in bleeding may have added value; all forms of oral anticoagulant treatment require better organization, including education and emergency access; laboratory testing still needs rapidly available sensitive tests with short turnaround times.
5. Biobanks specifically for thrombus storage and analysis are needed; further studies on novel modified activated protein C-based agents are required, including their cytoprotective properties; new avenues for optimizing treatment of patients with ischaemic stroke are needed, including novel agents that modify fibrinolytic activity (aimed at plasminogen activator inhibitor-1 and thrombin activatable fibrinolysis inhibitor).
Selective digestive decontamination (SDD) and selective oropharyngeal decontamination (SOD) improved intensive care unit (ICU), hospital and 28-day survival in ICUs with low levels of antibiotic resistance. Yet it is unclear whether the effect differs between medical and surgical ICU patients.
In an individual patient data meta-analysis, we systematically searched PubMed and included all randomized controlled studies published since 2000. We performed a two-stage meta-analysis with separate logistic regression models per study and per outcome (hospital survival and ICU survival) and subsequent pooling of main and interaction effects.
Six studies, all performed in countries with low levels of antibiotic resistance, yielded 16 528 hospital admissions and 17 884 ICU admissions for complete case analysis. Compared to standard care or placebo, the pooled adjusted odds ratios for hospital mortality were 0.82 (95% confidence interval (CI) 0.72–0.93) for SDD and 0.84 (95% CI 0.73–0.97) for SOD. Compared to SOD, the adjusted odds ratio for hospital mortality was 0.90 (95% CI 0.82–0.97) for SDD. The effects on hospital mortality were not modified by type of ICU admission (p values for interaction terms were 0.66 for SDD versus control, 0.87 for SOD versus control and 0.47 for SDD versus SOD). Similar results were found for ICU mortality.
In ICUs with low levels of antibiotic resistance, the effectiveness of SDD and SOD was not modified by type of ICU admission. SDD and SOD improved hospital and ICU survival compared to standard care in both patient populations, with SDD being more effective than SOD.
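The second stage of a two-stage individual patient data meta-analysis reduces, in its simplest form, to pooling per-study effect estimates. A minimal inverse-variance sketch with hypothetical 2x2 tables follows; the actual analysis used adjusted logistic regression per study, and this sketch assumes no zero cells:

```python
import math

def log_or(table):
    """Log odds ratio and its SE from a 2x2 table (a, b, c, d) =
    (events/treated, no-event/treated, events/control, no-event/control)."""
    a, b, c, d = table
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return lor, se

def pool_fixed(tables):
    """Second stage: fixed-effect inverse-variance pooling of per-study
    log odds ratios, returned as a pooled odds ratio."""
    results = [log_or(t) for t in tables]
    w = [1 / se**2 for _, se in results]
    pooled = sum(wi * lor for wi, (lor, _) in zip(w, results)) / sum(w)
    return math.exp(pooled)
```

In a full two-stage analysis, the first stage would replace `log_or` with a per-study regression model (here logistic, adjusted for covariates), and interaction terms such as SDD-by-admission-type would be pooled the same way.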