Abstract
Objective
To determine the presence of a set of pre-specified traditional and non-traditional criteria used to assess scientists for promotion and tenure in faculties of biomedical sciences among universities worldwide.
Design
Cross sectional study.
Setting
International sample of universities.
Participants
170 randomly selected universities from the Leiden ranking of world universities list.
Main outcome measure
Presence of five traditional (for example, number of publications) and seven non-traditional (for example, data sharing) criteria in guidelines for assessing assistant professors, associate professors, and professors and the granting of tenure in institutions with biomedical faculties.
Results
A total of 146 institutions had faculties of biomedical sciences, and 92 had eligible guidelines available for review. Traditional criteria of peer reviewed publications, authorship order, journal impact factor, grant funding, and national or international reputation were mentioned in 95% (n=87), 37% (34), 28% (26), 67% (62), and 48% (44) of the guidelines, respectively. Conversely, among non-traditional criteria, only citations (any mention in 26%; n=24) and accommodations for employment leave (37%; 34) were relatively commonly mentioned. Mention of alternative metrics for sharing research (3%; n=3) and data sharing (1%; 1) was rare, and three criteria (publishing in open access mediums, registering research, and adhering to reporting guidelines) were not found in any guidelines reviewed. Among guidelines for assessing promotion to full professor, traditional criteria were more commonly reported than non-traditional criteria (traditional criteria 54.2%, non-traditional items 9.5%; mean difference 44.8%, 95% confidence interval 39.6% to 50.0%; P=0.001). Notable differences were observed across continents in whether guidelines were accessible (Australia 100% (6/6), North America 97% (28/29), Europe 50% (27/54), Asia 58% (29/50), South America 17% (1/6)), with more subtle differences in the use of specific criteria.
Conclusions
This study shows that the evaluation of scientists emphasises traditional criteria as opposed to non-traditional criteria. This may reinforce research practices that are known to be problematic while insufficiently supporting the conduct of better quality research and open science. Institutions should consider incentivising non-traditional criteria.
Study registration
Open Science Framework (https://osf.io/26ucp/?view_only=b80d2bc7416543639f577c1b8f756e44).
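The mean difference reported above is a standard paired contrast between the per-guideline share of traditional and non-traditional criteria mentioned. The sketch below, using simulated data rather than the study's, shows one way such an estimate and a t-based 95% confidence interval could be computed; the simulated shares, variable names, and interval method are assumptions for illustration only.

```python
# A minimal sketch (hypothetical data, not the study's) of a paired mean
# difference with a t-based 95% confidence interval: each guideline gets
# the share of traditional and of non-traditional criteria it mentions,
# and the per-guideline difference is summarised.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_guidelines = 92            # eligible guidelines, per the abstract
n_trad, n_nontrad = 5, 7     # criteria counts, per the abstract

# Hypothetical per-guideline shares of criteria mentioned
trad_share = rng.binomial(n_trad, 0.54, n_guidelines) / n_trad
nontrad_share = rng.binomial(n_nontrad, 0.10, n_guidelines) / n_nontrad

diff = trad_share - nontrad_share      # paired difference per guideline
mean_diff = diff.mean()
se = diff.std(ddof=1) / np.sqrt(len(diff))
t_crit = stats.t.ppf(0.975, df=len(diff) - 1)
lo, hi = mean_diff - t_crit * se, mean_diff + t_crit * se
print(f"mean difference {mean_diff:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```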
Randomised controlled trials are increasingly embedded or nested within cohorts, or conducted using routinely collected data, including registries, electronic health records, and administrative databases, to assess whether participants are eligible for the trial and to facilitate recruitment, to deliver an embedded intervention, to collect trial outcome data, or a combination of these purposes. This report presents the Consolidated Standards of Reporting Trials (CONSORT) extension for randomised controlled trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE). The extension was developed to address the unique characteristics of trials conducted with these types of data, with the goal of improving reporting quality in the long term by setting standards early in the process of uptake of these trial designs. The extension was developed with a sequential approach, including a Delphi survey, a consensus meeting, and piloting of the checklist. The checklist was informed by the CONSORT 2010 statement and two reporting guidelines for observational studies, the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement and the REporting of studies Conducted using Observational Routinely collected Data (RECORD) statement. The extension includes eight items modified from the CONSORT 2010 statement and five new items. Reporting items with explanations and examples are provided, including key aspects of trials conducted using cohorts or routinely collected data that require specific reporting considerations.
Objectives
To develop effective interventions to prevent publishing in presumed predatory journals (ie, journals that display deceptive characteristics, markers or data that cannot be verified), it is helpful to understand the motivations and experiences of those who have published in these journals.
Design
An online survey, delivered to two sets of corresponding authors, containing demographic information and questions about researchers' perceptions of publishing in the presumed predatory journal, the type of article processing fees paid, and the quality of peer review received. The survey also asked six open-ended items about researchers' motivations and experiences.
Participants
Using Beall’s lists, we identified two groups of individuals who had published empirical articles in biomedical journals that were presumed to be predatory.
Results
Eighty-two authors partially responded to our survey (overall response rate ~14%: 11.4% (44/386) from the initial sample and 19.3% (38/197) from the second sample). The top three countries represented were India (n=21, 25.9%), USA (n=17, 21.0%) and Ethiopia (n=5, 6.2%). Three participants (3.9%) thought the journal they published in was predatory at the time of article submission. The majority of participants first encountered the journal via an email invitation to submit an article (n=32, 41.0%), or through an online search to find a journal with relevant scope (n=22, 28.2%). Most participants indicated their study received peer review (n=65, 83.3%) and that this was helpful and substantive (n=51, 79.7%). More than a third (n=32, 45.1%) indicated they did not pay fees to publish.
Conclusions
This work provides some evidence to inform policy to prevent future research from being published in predatory journals. Our research suggests that common views about predatory journals (eg, no peer review) may not always be true, and that a grey zone between legitimate and presumed predatory journals exists. These results are based on self-reports and may be biased, thus limiting their interpretation.
Abstract
Objective
To synthesise results of mental health outcomes in cohorts before and during the covid-19 pandemic.
Design
Systematic review.
Data sources
Medline, PsycINFO, CINAHL, Embase, Web of Science, China National Knowledge Infrastructure, Wanfang, medRxiv, and Open Science Framework Preprints.
Eligibility criteria for selecting studies
Studies comparing general mental health, anxiety symptoms, or depression symptoms assessed from 1 January 2020 or later with outcomes collected from 1 January 2018 to 31 December 2019 in any population, and comprising ≥90% of the same participants before and during the covid-19 pandemic or using statistical methods to account for missing data. Restricted maximum likelihood random effects meta-analyses (worse covid-19 outcomes representing positive change) were performed. Risk of bias was assessed using an adapted Joanna Briggs Institute Checklist for Prevalence Studies.
Results
As of 11 April 2022, 94 411 unique titles and abstracts including 137 unique studies from 134 cohorts were reviewed. Most of the studies were from high income (n=105, 77%) or upper middle income (n=28, 20%) countries. Among general population studies, no changes were found for general mental health (standardised mean difference (SMD) change 0.11, 95% confidence interval −0.00 to 0.22) or anxiety symptoms (0.05, −0.04 to 0.13), but depression symptoms worsened minimally (0.12, 0.01 to 0.24). Among women or female participants, general mental health (0.22, 0.08 to 0.35), anxiety symptoms (0.20, 0.12 to 0.29), and depression symptoms (0.22, 0.05 to 0.40) worsened by minimal to small amounts. In 27 other analyses across outcome domains among subgroups other than women or female participants, five analyses suggested that symptoms worsened by minimal or small amounts, and two suggested minimal or small improvements. No other subgroup experienced changes across all outcome domains. In three studies with data from March to April 2020 and late 2020, symptoms were unchanged from pre-covid-19 levels at both assessments or increased initially then returned to pre-covid-19 levels. Substantial heterogeneity and risk of bias were present across analyses.
Conclusions
High risk of bias in many studies and substantial heterogeneity suggest caution in interpreting results. Nonetheless, most symptom change estimates for general mental health, anxiety symptoms, and depression symptoms were close to zero and not statistically significant, and significant changes were of minimal to small magnitudes. Small negative changes occurred for women or female participants in all domains. The authors will update the results of this systematic review as more evidence accrues, with study results posted online (https://www.depressd.ca/covid-19-mental-health).
Review registration
PROSPERO CRD42020179703.
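The review pools standardised mean difference (SMD) changes with an inverse-variance random effects model. The paper used restricted maximum likelihood; the sketch below instead uses the simpler DerSimonian-Laird estimator of between-study variance to illustrate the general idea, and the effect sizes and variances in it are hypothetical, not the review's data.

```python
# A minimal sketch of an inverse-variance random effects meta-analysis of
# SMD changes, using the DerSimonian-Laird tau^2 estimator for brevity
# (the review itself used REML). All numbers are hypothetical.
import numpy as np

smd = np.array([0.05, 0.18, -0.02, 0.21, 0.10])      # per-study SMD change
var = np.array([0.010, 0.020, 0.015, 0.030, 0.012])  # per-study variance

w_fixed = 1.0 / var
pooled_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)

# DerSimonian-Laird between-study variance (tau^2)
q = np.sum(w_fixed * (smd - pooled_fixed) ** 2)
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random effects pooling with tau^2 added to each study's variance
w_random = 1.0 / (var + tau2)
pooled = np.sum(w_random * smd) / np.sum(w_random)
se = np.sqrt(1.0 / np.sum(w_random))
print(f"SMD change {pooled:.2f}, "
      f"95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}")
```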
Depression screening can improve upon usual care only if screening tools accurately identify depressed patients who would not otherwise be recognized by healthcare providers. Inclusion of patients already being treated for depression in studies of screening tool accuracy would inflate estimates of screening accuracy and yield. The present study investigated (1) the proportion of primary studies of depression screening tool accuracy recently published in MEDLINE-indexed journals that appropriately excluded currently diagnosed or treated patients; and (2) whether recently published meta-analyses identified the inclusion of currently diagnosed or treated patients as a potential source of bias.
MEDLINE was searched from January 1, 2013 through March 27, 2015 for primary studies and meta-analyses on depression screening tool accuracy.
Only 5 of 89 (5.6%) primary studies excluded currently diagnosed or treated patients from any analyses and only 3 (3.4%) from main analyses. In 3 studies that reported the number of patients excluded due to current treatment, the number of excluded patients was more than twice the number of newly identified depression cases. None of 5 meta-analyses identified the inclusion of currently diagnosed and treated patients as a potential source of bias.
The inclusion of currently diagnosed and treated patients in studies of depression screening tool accuracy is a problem that limits the applicability of research findings for actual clinical practice. Studies are needed that evaluate the diagnostic accuracy of depression screening tools among only untreated patients who would potentially be screened in practice.
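The inflation mechanism described above is easy to see in arithmetic form. The sketch below uses hypothetical counts (not the study's data) in which, as in the results above, the number of already-treated patients is more than twice the number of newly identified cases; counting treated patients as screening "hits" overstates what screening adds to care.

```python
# A minimal sketch (hypothetical counts) of how including already-diagnosed
# or treated patients inflates apparent screening yield: treated cases
# screen positive but are not newly identified, so counting them overstates
# the benefit of screening.

screened = 1000          # patients screened (hypothetical)
new_cases = 40           # previously unrecognised cases detected
treated_cases = 90       # already diagnosed/treated patients who screen positive

yield_incl_treated = (new_cases + treated_cases) / screened
yield_untreated_only = new_cases / (screened - treated_cases)

print(f"apparent yield, treated included: {yield_incl_treated:.1%}")   # 13.0%
print(f"yield among untreated patients:   {yield_untreated_only:.1%}")  # 4.4%
```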
The increase in the number of predatory journals puts scholarly communication at risk. To guard against publication in predatory journals, authors may use checklists to help detect predatory journals. A large number of such checklists appear to exist, yet it is uncertain whether they contain similar content. We conducted a systematic review to identify checklists that help to detect potential predatory journals and examined and compared their content and measurement properties.
We searched MEDLINE, Embase, PsycINFO, ERIC, Web of Science, and Library, Information Science & Technology Abstracts (January 2012 to November 2018); university library websites (January 2019); and YouTube (January 2019). We identified sources with original checklists used to detect potential predatory journals published in English, French or Portuguese. Checklists were defined as having instructions in point form, bullet form, tabular format or listed items. We excluded checklists or guidance on recognizing "legitimate" or "trustworthy" journals. To assess risk of bias, we adapted five questions from the A Checklist for Checklists tool a priori, as no formal assessment tool exists for the type of review conducted.
Of 1528 records screened, 93 met our inclusion criteria. The majority of included checklists to identify predatory journals were in English (n = 90, 97%), could be completed in fewer than five minutes (n = 68, 73%), included a mean of 11 items (range = 3 to 64) that were not weighted (n = 91, 98%), did not include qualitative guidance (n = 78, 84%) or quantitative guidance (n = 91, 98%), were not evidence-based (n = 90, 97%), and covered a mean of four of six thematic categories. Only three met our criteria for being evidence-based, i.e. scored three or more "yes" answers (low risk of bias) on the risk of bias tool.
There is a plethora of published checklists that may overwhelm authors looking to efficiently guard against publishing in predatory journals. The continued development of such checklists may be confusing and of limited benefit. The similarity across checklists suggests that a single evidence-based tool could be created to serve authors from all disciplines.
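The risk of bias scoring described in the results (three or more "yes" answers counting as low risk of bias) can be expressed compactly. The sketch below is illustrative only: the question wordings, class name, and threshold default are assumptions, not the review's actual instrument.

```python
# A minimal sketch (hypothetical question wordings) of scoring a checklist
# appraisal against five adapted risk of bias questions, where >= 3 "yes"
# answers is treated as low risk of bias, as in the review above.
from dataclasses import dataclass


@dataclass
class ChecklistAppraisal:
    name: str
    answers: dict[str, bool]  # question -> yes/no

    def low_risk_of_bias(self, threshold: int = 3) -> bool:
        # Count "yes" answers and compare against the threshold
        return sum(self.answers.values()) >= threshold


appraisal = ChecklistAppraisal(
    name="Example predatory journal checklist",  # hypothetical
    answers={
        "development process described": True,    # hypothetical items
        "items justified by evidence": True,
        "pilot tested": False,
        "guidance for use provided": True,
        "measurement properties reported": False,
    },
)
print(appraisal.name, "- low risk of bias:", appraisal.low_risk_of_bias())
```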
Abstract
Background
Systematic reviews appraise and synthesize the results from a body of literature. In healthcare, systematic reviews are also used to develop clinical practice guidelines. An increasingly common concern with systematic reviews is that they may unknowingly capture studies published in “predatory” journals and that these studies will be included in summary estimates and impact results, guidelines, and ultimately, clinical care.
Findings
There is currently no agreed-upon guidance for how best to manage articles from predatory journals that meet the inclusion criteria for a systematic review. We describe a set of actions that authors of systematic reviews can consider when handling articles published in predatory journals: (1) detail methods for addressing predatory journal articles a priori in a study protocol, (2) determine whether included studies are published in open access journals and whether they are listed in the Directory of Open Access Journals, and (3) conduct a sensitivity analysis with predatory papers excluded from the synthesis.
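Action (3) amounts to pooling the evidence twice, with and without the flagged studies, and comparing the summary estimates. The sketch below illustrates this with a simple fixed effect inverse-variance synthesis; the effect sizes, variances, and predatory flags are hypothetical, and a real review would use whatever model its protocol specifies.

```python
# A minimal sketch (hypothetical data) of a sensitivity analysis that
# repeats an inverse-variance pooled synthesis with studies published in
# presumed predatory journals excluded, to see whether the summary
# estimate changes.
import numpy as np

effects = np.array([0.30, 0.25, 0.80, 0.28])       # per-study effects
variances = np.array([0.02, 0.03, 0.05, 0.02])     # per-study variances
predatory = np.array([False, False, True, False])  # flagged via journal checks


def pool(effects: np.ndarray, variances: np.ndarray) -> tuple[float, float]:
    """Fixed effect inverse-variance pooled estimate and its standard error."""
    w = 1.0 / variances
    est = float(np.sum(w * effects) / np.sum(w))
    return est, float(np.sqrt(1.0 / np.sum(w)))


full = pool(effects, variances)
sens = pool(effects[~predatory], variances[~predatory])
print(f"all studies:        {full[0]:.2f} (SE {full[1]:.2f})")
print(f"predatory excluded: {sens[0]:.2f} (SE {sens[1]:.2f})")
```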
Conclusion
Encountering eligible articles published in presumed predatory journals when conducting a review is an increasingly common threat. Developing appropriate methods to account for eligible research published in predatory journals is needed to decrease the potential negative impact of predatory journals on healthcare.