Data from studies in nonhuman primates suggest that the triple monoclonal antibody cocktail ZMapp is a promising immune-based treatment for Ebola virus disease (EVD).
Beginning in March 2015, we conducted a randomized, controlled trial of ZMapp plus the current standard of care as compared with the current standard of care alone in patients with EVD that was diagnosed in West Africa by polymerase-chain-reaction (PCR) assay. Eligible patients of any age were randomly assigned in a 1:1 ratio to receive either the current standard of care or the current standard of care plus three intravenous infusions of ZMapp (50 mg per kilogram of body weight, administered every third day). Patients were stratified according to baseline PCR cycle-threshold value for the virus (≤22 vs. >22) and country of enrollment. Oral favipiravir was part of the current standard of care in Guinea. The primary end point was mortality at 28 days.
A total of 72 patients were enrolled at sites in Liberia, Sierra Leone, Guinea, and the United States. Of the 71 patients who could be evaluated, 21 died, representing an overall case fatality rate of 30%. Death occurred in 13 of 35 patients (37%) who received the current standard of care alone and in 8 of 36 patients (22%) who received the current standard of care plus ZMapp. The observed posterior probability that ZMapp plus the current standard of care was superior to the current standard of care alone was 91.2%, falling short of the prespecified threshold of 97.5%. Frequentist analyses yielded similar results (absolute difference in mortality with ZMapp, -15 percentage points; 95% confidence interval, -36 to 7). Baseline viral load was strongly predictive of both mortality and duration of hospitalization in all age groups.
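The Bayesian comparison reported above can be sketched with a simple Monte Carlo posterior computation. The sketch assumes independent Beta(1, 1) priors on each arm's mortality rate, an illustrative choice; the trial's prespecified prior model may differ, so the number below need not match the reported 91.2% exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed deaths / patients (from the abstract)
deaths_soc, n_soc = 13, 35        # standard of care alone
deaths_zmapp, n_zmapp = 8, 36     # standard of care plus ZMapp

# Independent Beta(1, 1) priors on each arm's mortality rate
# (illustrative; the trial's prespecified prior may differ)
p_soc = rng.beta(1 + deaths_soc, 1 + n_soc - deaths_soc, size=1_000_000)
p_zmapp = rng.beta(1 + deaths_zmapp, 1 + n_zmapp - deaths_zmapp, size=1_000_000)

# Posterior probability that ZMapp plus standard of care lowers mortality
prob_superior = np.mean(p_zmapp < p_soc)
print(round(prob_superior, 3))    # roughly 0.91 under these priors
```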
In this randomized, controlled trial of a putative therapeutic agent for EVD, although the estimated effect of ZMapp appeared to be beneficial, the result did not meet the prespecified statistical threshold for efficacy. (Funded by the National Institute of Allergy and Infectious Diseases and others; PREVAIL II ClinicalTrials.gov number, NCT02363322.)
Benkeser et al. present a very informative paper evaluating the efficiency gains of covariate adjustment in settings with binary, ordinal, and time‐to‐event outcomes. The adjustment method focuses on estimating the marginal treatment effect averaged over the covariate distribution in both arms combined. The authors show that covariate adjustment can achieve power gains that could yield answers more quickly. The suggested approach is an important weapon in the armamentarium against epidemics like COVID‐19. I recommend evaluating the procedure against more traditional approaches for conditional analyses (e.g., logistic regression) and against blinded methods of building prediction models followed by randomization‐based inference.
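For a binary outcome, the marginal (standardized) estimator of this kind can be sketched as follows: fit an outcome regression that includes covariates, then average each arm's predicted outcome over the pooled covariate distribution. The data, model, and effect sizes below are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial: binary outcome depends on covariate x and treatment a
n = 2000
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)                 # 1:1 randomization
logit = -0.5 + 1.0 * x - 0.7 * a
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit logistic regression of y on (1, a, x) by Newton-Raphson
X = np.column_stack([np.ones(n), a, x])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

# Standardization: average predictions over the POOLED covariate
# distribution, setting treatment to 1 and then to 0 for everyone
def mean_pred(a_val):
    Xa = np.column_stack([np.ones(n), np.full(n, a_val), x])
    return np.mean(1 / (1 + np.exp(-Xa @ beta)))

marginal_risk_diff = mean_pred(1) - mean_pred(0)
print(round(marginal_risk_diff, 3))   # covariate-adjusted marginal effect
```

The key design point is that the covariates enter only through the working model; the final estimand is still the marginal treatment effect, not a covariate-conditional one.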
As randomization methods use more information in more complex ways to assign patients to treatments, analysis of the resulting data becomes challenging. The treatment assignment vector and outcome vector become correlated whenever randomization probabilities depend on data correlated with outcomes. One straightforward analysis method is a re‐randomization test that fixes outcome data and creates a reference distribution for the test statistic by repeatedly re‐randomizing according to the same randomization method used in the trial. This article reviews re‐randomization tests, especially in nonstandard settings like covariate‐adaptive and response‐adaptive randomization. We show that re‐randomization tests provide valid inference in a wide range of settings. Nonetheless, there are simple examples demonstrating limitations.
The first table in many articles reporting results of a randomized clinical trial compares baseline factors across arms. Results that appear inconsistent with chance trigger suspicion, and in one case, accusation and confirmation of data falsification. We confirm theoretically results of simulation analyses showing that inconsistency with chance is extremely difficult to prove in the absence of any information about correlations between baseline covariates. We offer a reasonable diagnostic to trigger further investigation.
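The role of correlation can be seen in a toy simulation (all numbers illustrative): with equicorrelated baseline covariates in a fairly randomized trial, the count of nominally significant baseline comparisons is far more variable than the Binomial(m, 0.05) count independence would predict, so an extreme-looking first table is weak evidence on its own.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

m, n, rho, sims = 20, 100, 0.6, 1000   # covariates, per-arm size, correlation
counts = np.empty(sims, dtype=int)
for s in range(sims):
    # Equicorrelated baseline covariates via a shared factor per subject
    z = rng.normal(size=(2 * n, 1))
    e = rng.normal(size=(2 * n, m))
    x = np.sqrt(rho) * z + np.sqrt(1 - rho) * e
    arm = np.repeat([0, 1], n)          # fair 1:1 randomization
    pvals = [stats.ttest_ind(x[arm == 0, j], x[arm == 1, j]).pvalue
             for j in range(m)]
    counts[s] = np.sum(np.array(pvals) < 0.05)

# Under independence the count would be ~Binomial(20, 0.05): sd about 0.97.
print(counts.std())  # correlation inflates the spread well beyond that
```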
In a mathematical approach to hypothesis tests, we start with a clearly defined set of hypotheses and choose the test with the best properties for those hypotheses. In practice, we often start with less precise hypotheses. For example, often a researcher wants to know which of two groups generally has the larger responses, and either a t-test or a Wilcoxon-Mann-Whitney (WMW) test could be acceptable. Although both t-tests and WMW tests are usually associated with quite different hypotheses, the decision rule and p-value from either test could be associated with many different sets of assumptions, which we call perspectives. It is useful to have many of the different perspectives to which a decision rule may be applied collected in one place, since each perspective allows a different interpretation of the associated p-value. Here we collect many such perspectives for the two-sample t-test, the WMW test and other related tests. We discuss validity and consistency under each perspective and discuss recommendations between the tests in light of these many different perspectives. Finally, we briefly discuss a decision rule for testing genetic neutrality where knowledge of the many perspectives is vital to the proper interpretation of the decision rule.
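The two decision rules in question are easy to compute side by side; which null hypothesis each p-value addresses depends on the perspective adopted. Under a normal shift-alternative perspective both tests speak to a difference in means, while under a Mann-Whitney perspective the WMW p-value speaks to whether P(X < Y) = 1/2. Data below are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(loc=0.0, size=100)      # group 1
y = rng.normal(loc=0.8, size=100)      # group 2, shifted upward

t_res = stats.ttest_ind(x, y)          # targets difference in means
w_res = stats.mannwhitneyu(x, y, alternative="two-sided")  # targets P(X<Y)

print(round(t_res.pvalue, 4), round(w_res.pvalue, 4))
```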
Antiretroviral therapy is highly effective in suppressing human immunodeficiency virus (HIV). However, eradication of the virus in individuals with HIV has not been possible to date. Given that HIV suppression requires life-long antiretroviral therapy, predominantly on a daily basis, there is a need to develop clinically effective alternatives that use long-acting antiviral agents to inhibit viral replication. Here we report the results of a two-component clinical trial involving the passive transfer of two HIV-specific broadly neutralizing monoclonal antibodies, 3BNC117 and 10-1074. The first component was a randomized, double-blind, placebo-controlled trial that enrolled participants who initiated antiretroviral therapy during the acute/early phase of HIV infection. The second component was an open-label single-arm trial that enrolled individuals with viraemic control who were naive to antiretroviral therapy. Up to 8 infusions of 3BNC117 and 10-1074, administered over a period of 24 weeks, were well tolerated without any serious adverse events related to the infusions. Compared with the placebo, the combination broadly neutralizing monoclonal antibodies maintained complete suppression of plasma viraemia (for up to 43 weeks) after analytical treatment interruption, provided that no antibody-resistant HIV was detected at the baseline in the study participants. Similarly, potent HIV suppression was seen in the antiretroviral-therapy-naive study participants with viraemia carrying sensitive virus at the baseline. Our data demonstrate that combination therapy with broadly neutralizing monoclonal antibodies can provide long-term virological suppression without antiretroviral therapy in individuals with HIV, and our experience offers guidance for future clinical trials involving next-generation antibodies with long half-lives.
Hung et al. (2007) considered the problem of controlling the type I error rate for a primary and secondary endpoint in a clinical trial using a gatekeeping approach in which the secondary endpoint is tested only if the primary endpoint crosses its monitoring boundary. They considered a two‐look trial and showed by simulation that the naive method of testing the secondary endpoint at full level α at the time the primary endpoint reaches statistical significance does not control the familywise error rate at level α. Tamhane et al. (2010) derived analytic expressions for familywise error rate and power and confirmed the inflated error rate of the naive approach. Nonetheless, many people mistakenly believe that the closure principle can be used to prove that the naive procedure controls the familywise error rate. The purpose of this note is to explain in greater detail why there is a problem with the naive approach and show that the degree of alpha inflation can be as high as that of unadjusted monitoring of a single endpoint.
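The inflation is easy to reproduce by simulation. The sketch below mimics a two-look trial with a large primary effect so that the gate is almost always open; the secondary, whose null is true, is then tested at full two-sided level 0.05 at each look where the gate is open. For simplicity the primary is also monitored at an unadjusted 1.96 boundary and the two endpoints are simulated independently; the qualitative inflation is the point, not the exact numbers.

```python
import numpy as np

rng = np.random.default_rng(5)

sims, crit, t1 = 200_000, 1.96, 0.5   # looks at information 0.5 and 1.0
drift = 5.0                           # large primary effect opens the gate

# Primary-endpoint z-statistics at the two looks (corr = sqrt(t1))
a1, a2 = rng.normal(size=(2, sims))
zp1 = a1 + drift * np.sqrt(t1)
zp2 = np.sqrt(t1) * a1 + np.sqrt(1 - t1) * a2 + drift

# Secondary-endpoint z-statistics; the secondary null is TRUE
b1, b2 = rng.normal(size=(2, sims))
zs1 = b1
zs2 = np.sqrt(t1) * b1 + np.sqrt(1 - t1) * b2

# Naive rule: test the secondary at full level at every look where the
# primary gate is open
reject = ((np.abs(zp1) > crit) & (np.abs(zs1) > crit)) | \
         ((np.abs(zp2) > crit) & (np.abs(zs2) > crit))

print(reject.mean())   # noticeably above the nominal 0.05
```

In the limit of an overwhelming primary effect, the naive procedure is simply unadjusted two-look monitoring of the secondary endpoint, which is exactly the worst case described above.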
Designing clinical trials for emerging infectious diseases such as COVID-19 is challenging because information needed for proper planning may be lacking. Pre-specified adaptive designs can be attractive options, but what happens if a trial with no such design needs to be modified? For example, unexpectedly high efficacy (approximately 95%) in two COVID-19 vaccine trials might cause investigators in other COVID-19 vaccine trials to increase the number of interim analyses to allow earlier stopping for efficacy. If such a decision is based solely on external data, there are no issues, but what if internal trial data by arm are also examined? Fortunately, the conditional error principle of Müller and Schäfer (2004) can be used to ensure no inflation of the type 1 error rate, even if no interim analyses were planned. We study the properties, including limitations, of this method. We provide a Shiny app to evaluate changes in timing of interim analyses in response to outcome data by arm in clinical trials.
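For a fixed design monitored by a final one-sided z-test, the conditional error at information fraction t given interim statistic z1 has a closed form: A(z1) = 1 − Φ((z_α − √t·z1)/√(1 − t)). Any redesign (for example, adding interim looks) preserves the type 1 error rate as long as its conditional rejection probability given the interim data does not exceed A(z1). A minimal sketch, one-sided for simplicity:

```python
import math

def conditional_error(z1, t, z_alpha=1.96):
    """Probability that a fixed-design final one-sided z-test (critical
    value z_alpha) rejects, given interim statistic z1 at information t."""
    arg = (z_alpha - math.sqrt(t) * z1) / math.sqrt(1 - t)
    return 1 - 0.5 * (1 + math.erf(arg / math.sqrt(2)))

# A promising interim result leaves a large error budget for the redesign
print(round(conditional_error(2.0, 0.5), 3))
# A null interim result leaves very little
print(round(conditional_error(0.0, 0.5), 3))
```

The redesigned portion of the trial is then run as a new trial with significance level A(z1), which is what guarantees overall error control even though the change was unplanned.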
Multiple comparison adjustments have a long history, yet confusion remains about which procedures control type 1 error rate in a strong sense and how to show this. Part of the confusion stems from a powerful technique called the closed testing principle, whose statement is deceptively simple, but is sometimes misinterpreted. This primer presents a straightforward way to think about multiplicity adjustment.
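The closed testing principle can be made concrete in a few lines: reject an elementary hypothesis H_i only if every intersection hypothesis containing it is rejected by a valid level-α local test. With Bonferroni local tests this brute-force closure reduces to Holm's step-down procedure (a standard fact), which gives an easy check on the code:

```python
from itertools import combinations

def closed_test(pvals, alpha=0.05):
    """Closed testing with Bonferroni local tests: reject H_i iff every
    intersection hypothesis S containing i is rejected, i.e. some p-value
    in S falls below alpha / |S|."""
    m = len(pvals)
    reject = []
    for i in range(m):
        ok = True
        for k in range(1, m + 1):
            for S in combinations(range(m), k):
                if i in S and min(pvals[j] for j in S) > alpha / len(S):
                    ok = False   # an intersection containing i survives
        reject.append(ok)
    return reject

print(closed_test([0.001, 0.02, 0.04]))  # -> [True, True, True], as Holm
print(closed_test([0.03, 0.04, 0.5]))    # -> [False, False, False]
```

Strong familywise error control follows because, whichever hypotheses are truly null, their intersection is tested at level α and gates every false rejection among them.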