This paper examines whether the economic crisis induced by the COVID-19 pandemic exhibits a Schumpeterian “cleansing” of less productive firms. Using firm-level data collected for 34 economies up to 18 months into the crisis, the study finds that less productive firms have a higher probability of permanently closing during the crisis, suggesting that the process of cleansing out unproductive activities is occurring. The paper also uncovers strong and negative relationships of firm exit with digital presence and with innovation. These relationships are driven by small firms. The study further finds that a burdensome business environment increases the probability of firm exit, also driven by small firms, and that a negative relationship exists between firm exit and age. Finally, evidence shows that the cleansing process is disrupted in countries which have introduced policies imposing a moratorium on insolvency procedures.
Plain English Summary
The purpose of this analysis is to investigate whether firms that are more productive are less likely to cease operation during the economic crisis induced by the COVID-19 pandemic. To verify this hypothesis, the paper uses data on firm characteristics, productivity, and status of operation from 34 countries. The data on firm characteristics and productivity were collected before the crisis, while data on operating status were collected within 18 months of the appearance of the coronavirus. The results of the paper show that, indeed, more productive firms are more likely to survive the crisis. In addition, businesses that have been in operation for longer, or that have a website or introduced a new product in the years before the crisis, are more likely to continue operating. The positive role of digitalization and innovation holds especially for small firms. Conversely, businesses that must spend more time complying with government regulations are less likely to survive. The policy implications show the importance of digitalization and innovation, the vulnerabilities of small firms, and the significance of good governance.
The reproducibility of published research has become an important topic in science policy. A number of large-scale replication projects have been conducted to gauge the overall reproducibility in specific academic fields. Here, we present an analysis of data from four studies which sought to forecast the outcomes of replication projects in the social and behavioural sciences, using human experts who participated in prediction markets and answered surveys. Because the number of findings replicated and predicted in each individual study was small, pooling the data offers an opportunity to evaluate hypotheses regarding the performance of prediction markets and surveys with higher statistical power. In total, peer beliefs were elicited for the replication outcomes of 103 published findings. We find there is information within the scientific community about the replicability of scientific findings, and that both surveys and prediction markets can be used to elicit and aggregate this information. Our results show prediction markets can determine the outcomes of direct replications with 73% accuracy (n = 103). Both the prediction market prices and the average survey responses are correlated with outcomes (0.581 and 0.564 respectively, both p < .001). We also found a significant relationship between p-values of the original findings and replication outcomes. The dataset is made available through the R package "pooledmaRket" and can be used to further study community beliefs towards replication outcomes as elicited in the surveys and prediction markets.
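The accuracy and correlation figures reported above follow from a simple scoring rule: a final market price above 0.5 counts as predicting a successful replication, and prices (or average survey responses) are correlated with the binary outcomes. A minimal sketch of that scoring, using hypothetical prices and outcomes rather than the actual pooled data:

```python
# Hypothetical final market prices (each price is the elicited probability
# of a successful replication) and observed binary replication outcomes.
# These numbers are illustrative only, not the pooled dataset.
prices = [0.8, 0.3, 0.65, 0.2, 0.9, 0.45]
outcomes = [1, 0, 1, 0, 1, 1]

# Accuracy: a price above 0.5 is read as predicting a successful replication.
predictions = [1 if p > 0.5 else 0 for p in prices]
accuracy = sum(pred == out for pred, out in zip(predictions, outcomes)) / len(outcomes)

def pearson(xs, ys):
    """Pearson correlation; against a 0/1 outcome this is the
    point-biserial correlation reported for markets and surveys."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(accuracy)                      # fraction of outcomes the 0.5 threshold gets right
print(pearson(prices, outcomes))     # correlation of prices with outcomes
```

The same scoring applies unchanged to average survey responses in place of market prices.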
•Psychologists participated in prediction markets to predict replication outcomes.
•Prediction markets correctly predicted 75% of the replication outcomes.
•Prediction markets performed better than survey data in predicting replication outcomes.
•Survey data performed better in predicting relative effect size of the replications.
Understanding and improving reproducibility is crucial for scientific progress. Prediction markets and related methods of eliciting peer beliefs are promising tools to predict replication outcomes. We invited researchers in the field of psychology to judge the replicability of 24 studies replicated in the large-scale Many Labs 2 project. We elicited peer beliefs in prediction markets and surveys about two replication success metrics: the probability that the replication yields a statistically significant effect in the original direction (p < 0.001), and the relative effect size of the replication. The prediction markets correctly predicted 75% of the replication outcomes, and were highly correlated with the replication outcomes. Survey beliefs were also significantly correlated with replication outcomes, but had larger prediction errors. The prediction markets for relative effect sizes attracted little trading and thus did not work well. The survey beliefs about relative effect sizes performed better and were significantly correlated with observed relative effect sizes. The results suggest that replication outcomes can be predicted and that the elicitation of peer beliefs can increase our knowledge about scientific reproducibility and the dynamics of hypothesis testing.
The present investigation provides the first systematic empirical tests for the role of politics in academic research. In a large sample of scientific abstracts from the field of social psychology, we find both evaluative differences, such that conservatives are described more negatively than liberals, and explanatory differences, such that conservatism is more likely to be the focus of explanation than liberalism. In light of the ongoing debate about politicized science, a forecasting survey permitted scientists to state a priori empirical predictions about the results, and then change their beliefs in light of the evidence. Participating scientists accurately predicted the direction of both the evaluative and explanatory differences, but at the same time significantly overestimated both effect sizes. Scientists also updated their broader beliefs about political bias in response to the empirical results, providing a model for addressing divisive scientific controversies across fields.
•In scientific abstracts, conservatives are described more negatively than liberals.
•In research abstracts, conservatism is also more often explained than liberalism.
•In a forecasting survey, scientists overestimated both effects.
•Forecasters updated their beliefs about politics in science in light of the results.
The Defense Advanced Research Projects Agency (DARPA) programme 'Systematizing Confidence in Open Research and Evidence' (SCORE) aims to generate confidence scores for a large number of research claims from empirical studies in the social and behavioural sciences. The confidence scores will provide a quantitative assessment of how likely a claim is to hold up in an independent replication. To create the scores, we follow earlier approaches and use prediction markets and surveys to forecast replication outcomes. Based on an initial set of forecasts for the overall replication rate in SCORE and its dependence on the academic discipline and the time of publication, we show that participants expect replication rates to increase over time. Moreover, they expect replication rates to differ between fields, with the highest replication rate in economics (average survey response 58%), and the lowest in psychology and in education (average survey response of 42% for both fields). These results reveal insights into the academic community's views of the replication crisis, including for research fields for which no large-scale replication studies have been undertaken yet.
There is evidence that prediction markets are useful tools to aggregate information on researchers' beliefs about scientific results, including the outcome of replications. In this study, we use prediction markets to forecast the results of novel experimental designs that test established theories. We set up prediction markets for hypotheses tested in the Defense Advanced Research Projects Agency's (DARPA) Next Generation Social Science (NGS2) programme. Researchers were invited to bet on whether 22 hypotheses would be supported or not. We define support as a test result in the same direction as hypothesized, with a Bayes factor of at least 10 (i.e. a likelihood of the observed data under the tested hypothesis that is at least 10 times greater than under the null hypothesis). In addition to betting on this binary outcome, we asked participants to bet on the expected effect size (in Cohen's d) for each hypothesis. Our goal was to have at least 50 participants sign up for these markets. While this goal was met, only 39 participants ended up actually trading. Participants also completed a survey on both the binary result and the effect size. We find that neither prediction markets nor surveys performed well in predicting outcomes for NGS2.
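The Bayes-factor support criterion described in this abstract can be made concrete with a toy likelihood ratio. A minimal sketch under assumed toy numbers (a binomial experiment with two illustrative point hypotheses, not the actual NGS2 analyses):

```python
from math import comb

# Toy data: 28 successes in 40 trials. The hypothesized effect H1 and the
# null H0 are illustrative point hypotheses (p = 0.7 vs. p = 0.5).
k, n = 28, 40

def binom_lik(k, n, p):
    """Binomial likelihood of k successes in n trials at success rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Bayes factor: how many times more likely the data are under H1 than H0.
bf = binom_lik(k, n, 0.7) / binom_lik(k, n, 0.5)

# The criterion in the abstract counts a hypothesis as "supported" when the
# result is in the hypothesized direction and the Bayes factor is at least 10.
supported = bf >= 10
print(bf, supported)
```

With these toy numbers the Bayes factor comfortably exceeds 10, so the hypothesis would count as supported under the stated criterion.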
The impact of the firm's pre‐pandemic financial condition on the likelihood of a decline in its sales due to the COVID‐19 pandemic in 35 developing and emerging countries is estimated. Results show that better access to finance reduces the likelihood of a decline in sales. Access to finance is more effective in arresting sales declines when firms fear that production cuts may lead to the loss of skilled workers and hard‐to‐replace input suppliers. It is less effective when workers, women in particular, do not wish to continue working for health or family reasons. Important policy implications are discussed.
To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = −0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
Public Significance Statement
Research in the social sciences often has implications for public policies as well as individual decisions; for good reason, the robustness of research findings is therefore of widespread interest both inside and outside academia. Yet even findings that directly replicate (emerging again when the same methodology is repeated) may not always prove conceptually robust to different methodological approaches. The present initiative suggests that crowdsourcing study designs using many research teams can help reveal the conceptual robustness of the effects, better informing the public about the state of the empirical evidence.
Creative destruction in science
Tierney, Warren; Hardy, Jay H.; Ebersole, Charles R.; et al.
Organizational Behavior and Human Decision Processes, 11/2020, Volume 161
Journal Article
Peer reviewed
Open access
•The creative destruction approach combines theory pruning and open science methods.
•New measures, conditions, and populations are included in replication designs.
•The goal is to make replication more generative and engage in theory building.
•Recent studies examined work morality, biased reasoning, and gender discrimination.
Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions.
It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: they reduce confidence that the original theoretical prediction is true, but do not replace it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.
The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.