Systematic mapping assesses the nature of an evidence base, answering how much evidence exists on a particular topic. Perhaps the most useful outputs of a systematic map are an interactive database of studies and their meta-data, along with visualisations of this database. Despite the rapid increase in systematic mapping as an evidence synthesis method, there is currently a lack of Open Source software for producing interactive visualisations of systematic map databases. In April 2018, as attendees at and coordinators of the first ever Evidence Synthesis Hackathon in Stockholm, we decided to address this issue by developing an R-based tool called EviAtlas, an Open Access (i.e. free to use) and Open Source (i.e. software code is freely accessible and reproducible) tool for producing interactive, attractive tables and figures that summarise the evidence base. Here, we present our tool which includes the ability to generate vital visualisations for systematic maps and reviews as follows: a complete data table; a spatially explicit geographical information system (Evidence Atlas); Heat Maps that cross-tabulate two or more variables and display the number of studies belonging to multiple categories; and standard descriptive plots showing the nature of the evidence base, for example the number of studies published per year or number of studies per country. We believe that EviAtlas will provide a stimulus for the development of other exciting tools to facilitate evidence synthesis.
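As an illustration of the heat-map cross-tabulation described above (EviAtlas itself is written in R/Shiny; this Python sketch, with invented study records, only shows the underlying idea of counting studies per category pair):

```python
from collections import Counter

def cross_tabulate(records, row_key, col_key):
    """Count studies per (row, column) category pair: the data behind a heat map."""
    return Counter((r[row_key], r[col_key]) for r in records)

# Invented systematic-map records, one dict of coded meta-data per study.
studies = [
    {"country": "Kenya", "intervention": "agroforestry", "year": 2015},
    {"country": "Kenya", "intervention": "irrigation",   "year": 2017},
    {"country": "India", "intervention": "agroforestry", "year": 2015},
    {"country": "India", "intervention": "agroforestry", "year": 2018},
]

heatmap = cross_tabulate(studies, "country", "intervention")
```

Each cell of the resulting heat map is then just `heatmap[(row_value, col_value)]`, with missing combinations defaulting to zero.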
We report a systematic review and meta-analysis of research using animal models of chemotherapy-induced peripheral neuropathy (CIPN). We systematically searched 5 online databases in September 2012 and updated the search in November 2015, using machine learning and text mining to reduce the screening workload and improve accuracy. For each comparison, we calculated a standardised mean difference (SMD) effect size, and then combined effects in a random-effects meta-analysis. We assessed the impact of study design factors and reporting of measures to reduce risks of bias. We present power analyses for the most frequently reported behavioural tests; 337 publications were included. Most studies (84%) used male animals only. The most frequently reported outcome measure was evoked limb withdrawal in response to mechanical monofilaments. There was modest reporting of measures to reduce risks of bias. The number of animals required to obtain 80% power with a significance level of 0.05 varied substantially across behavioural tests. In this comprehensive summary of the use of animal models of CIPN, we have identified areas in which the value of preclinical CIPN studies might be increased. Using both sexes of animals in the modelling of CIPN, ensuring that outcome measures align with those most relevant in the clinic, and assessing pain in the context of the animal's ethology will likely improve external validity. Measures to reduce risk of bias should be employed to increase the internal validity of studies. Different outcome measures have different statistical power, and recognising this can refine our approaches in the modelling of CIPN.
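The two calculations this abstract relies on, a standardised mean difference per comparison and the number of animals needed for 80% power, can be sketched as follows. The function names and the normal-approximation sample-size formula are illustrative assumptions, not the authors' actual code:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardised mean difference (Hedges' g) for one treatment-control comparison."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # small-sample bias correction
    return d * correction

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Animals per group for a two-sided two-sample comparison (normal approximation)."""
    z_alpha = 1.959964  # z for alpha/2 = 0.025
    z_beta = 0.841621   # z for power = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

Under these assumptions, detecting a one-SD effect needs about 16 animals per group while a half-SD effect needs about 63, which is why the required sample size varies so much across behavioural tests with different typical effect sizes.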
Systematic review and meta-analysis of preclinical literature.
To assess the effects of biomaterial-based combination (BMC) strategies for the treatment of Spinal Cord Injury (SCI), the effects of individual biomaterials in the context of BMC strategies, and the factors influencing their efficacy. To assess the effects of different preclinical testing paradigms in BMC strategies.
We performed a systematic literature search of Embase, Web of Science and PubMed. All controlled preclinical studies describing an in vivo or in vitro model of SCI that tested a biomaterial in combination with at least one other regenerative strategy (cells, drugs, or both) were included. Two review authors conducted the study selection independently, extracted study characteristics independently and assessed study quality using a modified CAMARADES checklist. Effect size measures were combined using random-effects models and heterogeneity was explored using meta-regression with tau², I² and R² statistics. We tested for small-study effects using funnel plot-based methods.
134 publications were included, testing over 100 different BMC strategies. Overall, treatment with BMC therapies improved locomotor recovery by 25.3% (95% CI, 20.3-30.3; n = 102) and in vivo axonal regeneration by 1.6 SD (95% CI 1.2-2 SD; n = 117) in comparison with injury only controls.
BMC strategies improve locomotor outcomes after experimental SCI. Our comprehensive study highlights gaps in current knowledge and provides a foundation for the design of future experiments.
Background: Meta-analysis of preclinical data is used to evaluate the consistency of findings and to inform the design and conduct of future studies. Unlike clinical meta-analysis, preclinical data often involve many heterogeneous studies reporting outcomes from a small number of animals. Here, we review the methodological challenges in preclinical meta-analysis in estimating and explaining heterogeneity in treatment effects.

Methods: Assuming aggregate-level data, we focus on two topics: (1) estimation of heterogeneity using methods common in preclinical meta-analysis: the method of moments (DerSimonian and Laird; DL), maximum likelihood (restricted maximum likelihood; REML) and a Bayesian approach; (2) comparison of univariate versus multivariable meta-regression for adjusting estimated treatment effects for heterogeneity. Using data from a systematic review on the efficacy of interleukin-1 receptor antagonist in animals with stroke, we compare these methods and explore the impact of multiple covariates on the treatment effects.

Results: The three methods for estimating heterogeneity yielded similar estimates for the overall effect, but different estimates for between-study variability. The proportion of heterogeneity explained by a covariate is estimated to be larger using REML and the Bayesian method than with DL. Multivariable meta-regression explains more heterogeneity than univariate meta-regression.

Conclusions: Our findings highlight the importance of careful selection of the estimation method and the use of multivariable meta-regression to explain heterogeneity. There was no difference between REML and the Bayesian method, and both are recommended over DL. Multivariable meta-regression is worthwhile when more than one variable contributes to heterogeneity: it reduces more variability than any univariate model and increases the explained proportion of heterogeneity.
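To make the DL-versus-REML comparison concrete, here is a hedged sketch of an REML estimate of the between-study variance tau², found by minimising the restricted negative log-likelihood over a grid (a real analysis would use a proper optimiser or a package such as metafor):

```python
import numpy as np

def tau2_reml(effects, variances, grid_max=None, grid_size=4001):
    """REML estimate of between-study variance tau2 by grid search (illustrative)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    if grid_max is None:
        grid_max = 10.0 * max(float(np.var(y)), float(v.mean())) + 1e-9

    def neg_restricted_loglik(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)            # weighted mean at this tau2
        return (np.sum(np.log(v + tau2))          # -2 * restricted log-likelihood,
                + np.log(np.sum(w))               # up to an additive constant
                + np.sum(w * (y - mu) ** 2))

    grid = np.linspace(0.0, grid_max, grid_size)
    return float(grid[np.argmin([neg_restricted_loglik(t) for t in grid])])
```

Running this and a method-of-moments estimator on the same data set makes visible the kind of disagreement in between-study variability that the Results section describes.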
The ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines are widely endorsed but compliance is limited. We sought to determine whether journal-requested completion of an ARRIVE checklist improves full compliance with the guidelines.
In a randomised controlled trial, manuscripts reporting in vivo animal research submitted to PLOS ONE (March-June 2015) were randomly allocated to either requested completion of an ARRIVE checklist or current standard practice. Authors, academic editors, and peer reviewers were blinded to group allocation. Trained reviewers performed outcome adjudication in duplicate by assessing manuscripts against an operationalised version of the ARRIVE guidelines that consists of 108 items. Our primary outcome was the between-group difference in the proportion of manuscripts meeting all ARRIVE guideline checklist subitems.
We randomised 1689 manuscripts (control: n = 844, intervention: n = 845), of which 1269 were sent for peer review and 762 (control: n = 340; intervention: n = 332) were accepted for publication. No manuscript in either group achieved full compliance with the ARRIVE checklist. Animal husbandry (ARRIVE subitem 9b) was the only subitem to show improved reporting, with the proportion of compliant manuscripts rising from 52.1% in the control group to 74.1% in the intervention group (χ² = 34.0, df = 1, p = 2.1 × 10^…).
These results suggest that altering the editorial process to include requests for a completed ARRIVE checklist is not enough to improve compliance with the ARRIVE guidelines. Other approaches, such as more stringent editorial policies or a targeted approach on key quality items, may promote improvements in reporting.
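The between-group comparison of compliance proportions reported above is a df = 1 chi-square test on a 2 x 2 table. A self-contained sketch (the counts used in the example are invented, not the trial's data):

```python
import math

def chi2_two_proportions(x1, n1, x2, n2):
    """Pearson chi-square test (df = 1) comparing two independent proportions."""
    table = [[x1, n1 - x1], [x2, n2 - x2]]       # compliant / non-compliant per group
    total = n1 + n2
    row_totals = [n1, n2]
    col_totals = [x1 + x2, total - (x1 + x2)]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))           # upper-tail p for chi-square with df = 1
    return chi2, p
```

The closed-form tail probability works only for df = 1, where the chi-square statistic is the square of a standard normal variable.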
Design of Meta-Analysis Studies. Macleod, Malcolm R; Tanriver-Ayder, Ezgi; Hair, Kaitlyn ...
Good Research Practice in Non-Clinical Pharmacology and Biomedicine, 2020, Volume 257
Book Chapter, Journal Article
Peer reviewed
Open access
Any given research claim can be made with a degree of confidence that a phenomenon is present, with an estimate of the precision of the observed effects and a prediction of the extent to which the findings might hold true under different experimental or real-world conditions. In some situations, the certainty and precision obtained from a single study are sufficient to reliably inform future research decisions. However, in other situations greater certainty is required. This might be the case where a substantial research investment is planned, a pivotal claim is to be made or the launch of a clinical trial programme is being considered. Under these circumstances, some form of summary of findings across studies may be helpful.

Summary estimates can describe findings from exploratory (observational) or hypothesis-testing experiments, but importantly, the creation of such summaries is, in itself, observational rather than experimental research. The process is therefore particularly at risk from selective identification of the literature to be included; this can be addressed using systematic search strategies and pre-specified criteria for inclusion and exclusion against which possible contributing data will be assessed. This characterises a systematic review (in contrast to non-systematic or narrative reviews). In meta-analysis, there is an attempt to provide a quantitative summary of such research findings.
Throughout the global coronavirus pandemic, we have seen an unprecedented volume of COVID-19 research publications. This vast body of evidence continues to grow, making it difficult for research users to keep up with the pace of evolving research findings. To enable the synthesis of this evidence for timely use by researchers, policymakers, and other stakeholders, we developed an automated workflow to collect, categorise, and visualise the evidence from primary COVID-19 research studies. We trained a crowd of volunteer reviewers to annotate studies by relevance to COVID-19, study objectives, and methodological approaches. Using these human decisions, we are training machine learning classifiers and applying text-mining tools to continually categorise the findings and evaluate the quality of COVID-19 evidence.
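As a toy illustration of training a classifier from crowd annotations (the project's actual models and features are not described here in reproducible detail; this multinomial naive Bayes on invented labels only shows the general shape of the task):

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Fit a multinomial naive Bayes model from labelled, whitespace-tokenised documents."""
    word_counts = {}
    class_counts = Counter(labels)
    vocab = set()
    for doc, label in zip(docs, labels):
        tokens = doc.lower().split()
        word_counts.setdefault(label, Counter()).update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    """Return the most probable class for a new document under the fitted model."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_class, best_score = None, -math.inf
    for cls in class_counts:
        total_words = sum(word_counts[cls].values())
        score = math.log(class_counts[cls] / total_docs)   # log prior
        for tok in doc.lower().split():
            # Laplace smoothing so unseen words never zero out a class
            score += math.log((word_counts[cls][tok] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_class, best_score = cls, score
    return best_class
```

In the described workflow, the crowd's annotations play the role of `labels`, and the fitted classifier then triages the stream of new publications for human review.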