At present, the metrics of scientific success used by most universities - citations, publications and grant money - encourage a linear career path from postgraduate studies to a tenured position. The latest Research Excellence Framework assessment of universities in the United Kingdom, for example, provided hundreds of case studies, including ones on start-up companies and on policy informed by research.
The pervasiveness of irreproducible research remains a thorny problem for the progress of scientific endeavor, spawning an abundance of opinion, investigation, and proposals for improvement. Irreproducible research has negative consequences beyond the obvious impact on achieving new scientific discoveries that can advance healthcare and enable new technologies. The conduct of science is resource intensive, resulting in a large environmental impact from even the smallest research programs. There is value in making explicit connections between the conduct of more rigorous, reproducible science and commitments to environmental sustainability. Shared research resources (also commonly known as cores) often have an institutional role in supporting researchers in the responsible conduct of research through training, informal mentorship, and services, and are particularly well suited to promulgating essential principles of scientific rigor, reproducibility, and transparency. Shared research resources can also play a role in advancing sustainability by virtue of their inherently efficient science model, in which singular shared equipment, technology, and expertise resources can serve many different research programs. Programs that elevate shared research resources, scientific rigor, reproducibility, transparency, and environmental sustainability in harmony may achieve a unique synergy. Several case studies and quality paradigms are discussed that offer tools and concepts that can be adapted in whole or in part by individual shared research resources or research-intensive institutions as part of an overall program of sustainability.
What drives academic data sharing? Fecher, Benedikt; Friesike, Sascha; Hebing, Marcel
PLoS ONE, 02/2015, Volume 10, Issue 2
Journal Article
Peer reviewed
Open access
Despite widespread support from policy makers, funding agencies, and scientific journals, academic researchers rarely make their research data available to others. At the same time, data sharing in research is attributed a vast potential for scientific progress. It allows the reproducibility of study results and the reuse of old data for new research questions. Based on a systematic review of 98 scholarly papers and an empirical survey among 603 secondary data users, we develop a conceptual framework that explains the process of data sharing from the primary researcher's point of view. We show that this process can be divided into six descriptive categories: data donor, research organization, research community, norms, data infrastructure, and data recipients. Drawing from our findings, we discuss theoretical implications regarding knowledge creation and dissemination as well as research policy measures to foster academic collaboration. We conclude that research data cannot be regarded as knowledge commons, but research policies that better incentivise data sharing are needed to improve the quality of research results and foster scientific progress.
High quality protocols facilitate proper conduct, reporting, and external review of clinical trials. However, the completeness of trial protocols is often inadequate. To help improve the content and quality of protocols, an international group of stakeholders developed the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials). The SPIRIT Statement provides guidance in the form of a checklist of recommended items to include in a clinical trial protocol. This SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations. For each checklist item, we provide a rationale and detailed description; a model example from an actual protocol; and relevant references supporting its importance. We strongly recommend that this explanatory paper be used in conjunction with the SPIRIT Statement. A website of resources is also available (www.spirit-statement.org). The SPIRIT 2013 Explanation and Elaboration paper, together with the Statement, should help with the drafting of trial protocols. Complete documentation of key trial elements can facilitate transparency and protocol review for the benefit of all stakeholders.
We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable Research Practices (QRPs), including cherry picking statistically significant results, p hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues that use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found that 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p hacking); and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large scale replication studies in psychology and other disciplines. The rates of QRPs found in this study are comparable with the rates seen in psychology, indicating that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.