Grauer's gorilla (Gorilla beringei graueri), the world's largest primate, is confined to eastern Democratic Republic of Congo (DRC) and is threatened by civil war and insecurity. During the war, armed groups in mining camps relied on hunting bushmeat, including gorillas. Insecurity and the presence of several militia groups across Grauer's gorilla's range made it very difficult to assess their population size. Here we use a novel method that enables rigorous assessment of local community and ranger-collected data on gorilla occupancy to evaluate the impacts of civil war on Grauer's gorilla, which prior to the war was estimated to number 16,900 individuals. We show that gorilla numbers in their stronghold of Kahuzi-Biega National Park have declined by 87%. Encounter rate data of gorilla nests at 10 sites across its range indicate declines of 82-100% at six of these sites. Spatial occupancy analysis identifies three key areas as the most critical sites for the remaining populations of this ape and indicates that the range of this taxon is around 19,700 km². We estimate that only 3,800 Grauer's gorillas remain in the wild, a 77% decline in one generation, justifying its elevation to Critically Endangered status on the IUCN Red List of Threatened Species.
Abstract
Objective To derive and validate a risk prediction algorithm to estimate hospital admission and mortality outcomes from coronavirus disease 2019 (covid-19) in adults.
Design Population based cohort study.
Setting and participants QResearch database, comprising 1205 general practices in England with linkage to covid-19 test results, Hospital Episode Statistics, and death registry data. 6.08 million adults aged 19-100 years were included in the derivation dataset and 2.17 million in the validation dataset. The derivation and first validation cohort period was 24 January 2020 to 30 April 2020. The second temporal validation cohort covered the period 1 May 2020 to 30 June 2020.
Main outcome measures The primary outcome was time to death from covid-19, defined as death due to confirmed or suspected covid-19 as per the death certification or death occurring in a person with confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in the period 24 January to 30 April 2020. The secondary outcome was time to hospital admission with confirmed SARS-CoV-2 infection. Models were fitted in the derivation cohort to derive risk equations using a range of predictor variables. Performance, including measures of discrimination and calibration, was evaluated in each validation time period.
Results 4384 deaths from covid-19 occurred in the derivation cohort during follow-up, 1722 in the first validation cohort period, and 621 in the second validation cohort period. The final risk algorithms included age, ethnicity, deprivation, body mass index, and a range of comorbidities. The algorithm had good calibration in the first validation cohort. For deaths from covid-19 in men, it explained 73.1% (95% confidence interval 71.9% to 74.3%) of the variation in time to death (R2); the D statistic was 3.37 (95% confidence interval 3.27 to 3.47), and Harrell’s C was 0.928 (0.919 to 0.938). Similar results were obtained for women, for both outcomes, and in both time periods. In the top 5% of patients with the highest predicted risks of death, the sensitivity for identifying deaths within 97 days was 75.7%. People in the top 20% of predicted risk of death accounted for 94% of all deaths from covid-19.
Conclusion The QCOVID population based risk algorithm performed well, showing very high levels of discrimination for deaths and hospital admissions due to covid-19. The absolute risks presented, however, will change over time in line with the prevailing SARS-CoV-2 infection rate and the extent of social distancing measures in place, so they should be interpreted with caution. The model can, however, be recalibrated for different time periods and has the potential to be dynamically updated as the pandemic evolves.
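Harrell's C reported above is a rank-based measure of discrimination for censored survival data: among all usable pairs of patients, it is the proportion in which the patient who died earlier was also assigned the higher predicted risk. A minimal sketch of the calculation, using invented data rather than anything from the QResearch cohort:

```python
def harrells_c(times, events, risks):
    """Harrell's concordance index for right-censored data.

    A pair (i, j) is usable when the earlier time is an observed
    event; it is concordant when that patient also has the higher
    predicted risk. Tied risks count as half-concordant.
    """
    concordant = 0.0
    usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # i must have an observed event strictly before j's time
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / usable

# Illustrative: the patient who died at time 4 was given a *lower*
# risk than the patient censored at time 6, so one of the three
# usable pairs is discordant.
c = harrells_c(times=[2, 4, 6], events=[1, 1, 0], risks=[0.9, 0.3, 0.5])
print(round(c, 3))  # 0.667
```

A C of 0.928, as reported for the QCOVID model, means the model orders roughly 93% of usable patient pairs correctly.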
CONTEXT Improving vitamin D status may be an important modifiable risk factor to reduce falls and fractures; however, adherence to daily supplementation is typically poor. OBJECTIVE To determine whether a single annual dose of 500 000 IU of cholecalciferol administered orally to older women in autumn or winter would improve adherence and reduce the risk of falls and fracture. DESIGN, SETTING, AND PARTICIPANTS A double-blind, placebo-controlled trial in which 2256 community-dwelling women aged 70 years or older, considered to be at high risk of fracture, were recruited from June 2003 to June 2005 and randomly assigned to receive cholecalciferol or placebo each autumn to winter for 3 to 5 years. The study concluded in 2008. INTERVENTION 500 000 IU of cholecalciferol or placebo. MAIN OUTCOME MEASURES Falls and fractures were ascertained using monthly calendars; details were confirmed by telephone interview. Fractures were radiologically confirmed. In a substudy, 137 randomly selected participants underwent serial blood sampling for 25-hydroxycholecalciferol and parathyroid hormone levels. RESULTS Women in the cholecalciferol (vitamin D) group had 171 fractures vs 135 in the placebo group; 837 women in the vitamin D group fell 2892 times (rate, 83.4 per 100 person-years), while 769 women in the placebo group fell 2512 times (rate, 72.7 per 100 person-years; incidence rate ratio [RR], 1.15; 95% confidence interval [CI], 1.02-1.30; P = .03). The incidence RR for fracture in the vitamin D group was 1.26 (95% CI, 1.00-1.59; P = .047) vs the placebo group (rates per 100 person-years, 4.9 vitamin D vs 3.9 placebo). A temporal pattern was observed in a post hoc analysis of falls: the incidence RR of falling in the vitamin D group vs the placebo group was 1.31 in the first 3 months after dosing and 1.13 during the following 9 months (test for homogeneity, P = .02). In the substudy, the median baseline serum 25-hydroxycholecalciferol was 49 nmol/L.
Less than 3% of the substudy participants had 25-hydroxycholecalciferol levels lower than 25 nmol/L. In the vitamin D group, 25-hydroxycholecalciferol levels increased at 1 month after dosing to approximately 120 nmol/L, were approximately 90 nmol/L at 3 months, and remained higher than in the placebo group 12 months after dosing. CONCLUSION Among older community-dwelling women, annual oral administration of high-dose cholecalciferol resulted in an increased risk of falls and fractures. TRIAL REGISTRATION anzctr.org.au Identifier: ACTRN12605000658617; isrctn.org Identifier: ISRCTN83409867
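The headline incidence rate ratio for falls follows directly from the two event rates per 100 person-years quoted above. The sketch below reproduces it, then adds a naive Poisson confidence interval for illustration only; it comes out narrower than the published 1.02-1.30 because repeated falls within the same woman are correlated, which this simple calculation ignores:

```python
import math

# Figures from the abstract: fall counts and rates per 100 person-years
falls_d, rate_d = 2892, 83.4   # vitamin D group
falls_p, rate_p = 2512, 72.7   # placebo group

# Incidence rate ratio: ratio of the two rates
irr = rate_d / rate_p
print(round(irr, 2))  # 1.15, as reported

# Naive Poisson 95% CI on the log scale (treats every fall as an
# independent event, so it understates the true uncertainty)
se = math.sqrt(1 / falls_d + 1 / falls_p)
lo = math.exp(math.log(irr) - 1.96 * se)
hi = math.exp(math.log(irr) + 1.96 * se)
print(round(lo, 2), round(hi, 2))
```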
Summary
Objective
Men with congenital adrenal hyperplasia (CAH) have impaired fertility. We aimed to assess fertility outcomes and the importance of hypogonadotropic hypogonadism, testicular failure, and the presence of testicular adrenal rest tumours (TART).
Design
Retrospective analysis of men attending an adult CAH clinic in a tertiary centre.
Patients
Fifty men with CAH due to 21-hydroxylase deficiency were identified, of whom 35 were salt-wasting and 15 were non-salt-wasting.
Measurements
Review of fertility history and parameters including luteinizing hormone (LH), follicle-stimulating hormone (FSH), androstenedione, 17-hydroxyprogesterone (17-OHP), semen analysis and the presence of TART on ultrasound.
Results
TART were detected by ultrasound in 21 (47%), and their presence was associated with an elevated FSH (P = 0·01). Severe oligospermia was present in 11 of 23 (48%), and this was associated with an elevated FSH (P = 0·02), suppressed LH (P < 0·01) and TART (P = 0·03) when compared with those with a sperm count >5 × 10⁶ per ml. Of those who desired fertility, 10 of 17 (59%) required treatment intensification and four underwent in vitro fertilization. Intensification resulted in a rise in median LH (0·6–4·3 IU/l; P = 0·01). Live birth rate was 15 of 17 (88%) with a median (range) time to conception of 8 (0–38) months.
Conclusions
Suppressed LH is a marker for subfertility and is often reversible. Testicular failure is closely associated with TART formation. If TART are detected, sperm cryopreservation should be offered given the risk of progression to irreversible testicular failure. Male fertility in CAH can be improved by intensified treatment and assisted reproductive technology.
The study of traditional knowledge of medicinal plants has led to discoveries that have helped combat diseases and improve healthcare. However, the development of quantitative measures that can assist our quest for new medicinal plants has not greatly advanced in recent years. Phylogenetic tools have entered many scientific fields in the last two decades to provide explanatory power, but have been overlooked in ethnomedicinal studies. Several studies show that medicinal properties are not randomly distributed in plant phylogenies, suggesting that phylogeny shapes ethnobotanical use. Nevertheless, empirical studies that explicitly combine ethnobotanical and phylogenetic information are scarce.
In this study, we borrowed tools from community ecology phylogenetics to quantify the significance of phylogenetic signal in medicinal properties in plants and identify nodes on phylogenies with high bioscreening potential. To do this, we produced an ethnomedicinal review from extensive literature research and a multi-locus phylogenetic hypothesis for the pantropical genus Pterocarpus (Leguminosae: Papilionoideae). We demonstrate that species used to treat certain conditions, such as malaria, are significantly phylogenetically clumped, and we highlight nodes in the phylogeny that are significantly overabundant in species used to treat certain conditions. These cross-cultural patterns in ethnomedicinal usage in Pterocarpus are interpreted in the light of phylogenetic relationships.
This study provides techniques that enable the application of phylogenies in bioscreening, but also sheds light on the processes that shape cross-cultural ethnomedicinal patterns. This community phylogenetic approach demonstrates that similar ethnobotanical uses can arise in parallel in different areas where related plants are available. With a vast amount of ethnomedicinal and phylogenetic information available, we predict that this field, after further refinement of the techniques, will expand into similar research areas, such as pest management or the search for bioactive plant-based compounds.
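The phylogenetic clumping test described above can be illustrated with a toy version of a standard community-phylogenetics approach: compute the mean pairwise phylogenetic distance (MPD) among species used for a condition, then compare it against the MPD of every equally sized species set drawn from the phylogeny (an exact permutation null). The species names and distances below are invented for illustration and are not from the Pterocarpus dataset:

```python
from itertools import combinations

def mpd(species, dist):
    """Mean pairwise phylogenetic distance among a set of species."""
    pairs = list(combinations(sorted(species), 2))
    return sum(dist[p] for p in pairs) / len(pairs)

# Hypothetical 4-species phylogeny: A and B are close relatives,
# every other pair is distant.
dist = {("A", "B"): 1.0, ("A", "C"): 10.0, ("A", "D"): 10.0,
        ("B", "C"): 10.0, ("B", "D"): 10.0, ("C", "D"): 10.0}
species = ["A", "B", "C", "D"]
used = {"A", "B"}  # species recorded for one ethnomedicinal condition

observed = mpd(used, dist)
# Exact null distribution: MPD of every possible same-size species set
null = [mpd(s, dist) for s in combinations(species, len(used))]
p = sum(m <= observed for m in null) / len(null)
print(observed, round(p, 3))  # 1.0 0.167: only 1 of 6 sets is as clumped
```

A small p-value indicates that the species used for the condition are more closely related than expected by chance, i.e. phylogenetically clumped.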
COVID-19 has disproportionately affected minority ethnic populations in the UK. Our aim was to quantify ethnic differences in SARS-CoV-2 infection and COVID-19 outcomes during the first and second waves of the COVID-19 pandemic in England.
We conducted an observational cohort study of adults (aged ≥18 years) registered with primary care practices in England for whom electronic health records were available through the OpenSAFELY platform, and who had at least 1 year of continuous registration at the start of each study period (wave 1: Feb 1 to Aug 3, 2020; wave 2: Sept 1 to Dec 31, 2020). Individual-level primary care data were linked to data from other sources on the outcomes of interest: SARS-CoV-2 testing and positive test results and COVID-19-related hospital admissions, intensive care unit (ICU) admissions, and death. The exposure was self-reported ethnicity as captured on the primary care record, grouped into five high-level census categories (White, South Asian, Black, other, and mixed) and 16 subcategories across these five categories, as well as an unknown ethnicity category. We used multivariable Cox regression to examine ethnic differences in the outcomes of interest. Models were adjusted for age, sex, deprivation, clinical factors and comorbidities, and household size, with stratification by geographical region.
Of 17 288 532 adults included in the study (excluding care home residents), 10 877 978 (62·9%) were White, 1 025 319 (5·9%) were South Asian, 340 912 (2·0%) were Black, 170 484 (1·0%) were of mixed ethnicity, 320 788 (1·9%) were of other ethnicity, and 4 553 051 (26·3%) were of unknown ethnicity. In wave 1, the likelihood of being tested for SARS-CoV-2 infection was slightly higher in the South Asian group (adjusted hazard ratio 1·08 [95% CI 1·07–1·09]), Black group (1·08 [1·06–1·09]), and mixed ethnicity group (1·04 [1·02–1·05]) and was decreased in the other ethnicity group (0·77 [0·76–0·78]) relative to the White group. The risk of testing positive for SARS-CoV-2 infection was higher in the South Asian group (1·99 [1·94–2·04]), Black group (1·69 [1·62–1·77]), mixed ethnicity group (1·49 [1·39–1·59]), and other ethnicity group (1·20 [1·14–1·28]). Compared with the White group, the four remaining high-level ethnic groups had an increased risk of COVID-19-related hospitalisation (South Asian group 1·48 [1·41–1·55], Black group 1·78 [1·67–1·90], mixed ethnicity group 1·63 [1·45–1·83], other ethnicity group 1·54 [1·41–1·69]), COVID-19-related ICU admission (2·18 [1·92–2·48], 3·12 [2·65–3·67], 2·96 [2·26–3·87], 3·18 [2·58–3·93]), and death (1·26 [1·15–1·37], 1·51 [1·31–1·71], 1·41 [1·11–1·81], 1·22 [1·00–1·48]). In wave 2, the risks of hospitalisation, ICU admission, and death relative to the White group were increased in the South Asian group but attenuated for the Black group compared with these risks in wave 1. Disaggregation into 16 ethnicity groups showed important heterogeneity within the five broader categories.
Some minority ethnic populations in England have excess risks of testing positive for SARS-CoV-2 and of adverse COVID-19 outcomes compared with the White population, even after accounting for differences in sociodemographic, clinical, and household characteristics. Causes are likely to be multifactorial, and delineating the exact mechanisms is crucial. Tackling ethnic inequalities will require action across many fronts, including reducing structural inequalities, addressing barriers to equitable care, and improving uptake of testing and vaccination.
Medical Research Council.
Replication fork stalling and collapse is a major source of genome instability leading to neoplastic transformation or cell death. Such stressed replication forks can be conservatively repaired and restarted using homologous recombination (HR) or non-conservatively repaired using micro-homology mediated end joining (MMEJ). HR repair of stressed forks is initiated by 5' end resection near the fork junction, which permits 3' single strand invasion of a homologous template for fork restart. This 5' end resection also prevents classical non-homologous end-joining (cNHEJ), a competing pathway for DNA double-strand break (DSB) repair. Unopposed cNHEJ can cause genome instability during replication stress by abnormally fusing free double strand ends that occur as unstable replication fork repair intermediates. We show here that the previously uncharacterized Exonuclease/Endonuclease/Phosphatase Domain-1 (EEPD1) protein is required for initiating repair and restart of stalled forks. EEPD1 is recruited to stalled forks, enhances 5' DNA end resection, and promotes restart of stalled forks. Interestingly, EEPD1 directs DSB repair away from cNHEJ, and also away from MMEJ, which requires limited end resection for initiation. EEPD1 is also required for proper ATR and CHK1 phosphorylation, and formation of gamma-H2AX, RAD51 and phospho-RPA32 foci. Consistent with a direct role in stalled replication fork cleavage, EEPD1 is a 5' overhang nuclease in an obligate complex with the end resection nuclease Exo1 and BLM. EEPD1 depletion causes nuclear and cytogenetic defects, which are made worse by replication stress. Depleting 53BP1, which slows cNHEJ, fully rescues the nuclear and cytogenetic abnormalities seen with EEPD1 depletion. These data demonstrate that genome stability during replication stress is maintained by EEPD1, which initiates HR and inhibits cNHEJ and MMEJ.
Estimation of the effect of a binary exposure on an outcome in the presence of confounding is often carried out via outcome regression modelling. An alternative approach is to use propensity score methodology. The propensity score is the conditional probability of receiving the exposure given the observed covariates and can be used, under the assumption of no unmeasured confounders, to estimate the causal effect of the exposure. In this article, we provide a non-technical and intuitive discussion of propensity score methodology, motivating the use of the propensity score approach by analogy with randomised studies, and describe the four main ways in which this methodology can be implemented. We carefully describe the population parameters being estimated, an issue that is frequently overlooked in the medical literature. We illustrate these four methods using data from a study investigating the association between maternal choice to provide breast milk and the infant's subsequent neurodevelopment. We outline useful extensions of propensity score methodology and discuss directions for future research. Propensity score methods remain controversial and there is no consensus as to when, if ever, they should be used in place of traditional outcome regression models. We therefore end with a discussion of the relative advantages and disadvantages of each.
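One of the standard implementations of propensity score methodology, inverse probability of treatment weighting, can be sketched in a few lines. Here the propensity score is estimated non-parametrically as the observed exposure rate within each level of a single discrete confounder; the records are invented, constructed so that exposure raises the mean outcome by exactly 1 in every stratum:

```python
# Toy records: (confounder z, exposure a, outcome y). Exposure is more
# common when z = 1, and z also raises the outcome, so the crude
# exposed-vs-unexposed comparison is confounded.
data = [(0, 0, 0), (0, 0, 0), (0, 1, 1), (0, 1, 1),
        (1, 0, 1), (1, 1, 2), (1, 1, 2), (1, 1, 2)]

# Propensity score: P(a = 1 | z), estimated as the stratum exposure rate
levels = {z for z, _, _ in data}
ps = {z: sum(a for zz, a, _ in data if zz == z) /
         sum(1 for zz, _, _ in data if zz == z) for z in levels}

# Horvitz-Thompson style weighted means: each exposed subject is
# weighted by 1/ps, each unexposed subject by 1/(1 - ps)
n = len(data)
mean1 = sum(a * y / ps[z] for z, a, y in data) / n
mean0 = sum((1 - a) * y / (1 - ps[z]) for z, a, y in data) / n
ate = mean1 - mean0
print(round(ate, 6))  # 1.0: weighting removes the confounding by z
```

In practice the propensity score is usually estimated with a regression model over many covariates, but the weighting step is the same.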
It has long been advised to account for baseline covariates in the analysis of confirmatory randomised trials, with the main statistical justifications being that this increases power and, when a randomisation scheme balances covariates, permits a valid estimate of experimental error. There are various methods available to account for covariates but it is not clear how to choose among them.
Taking the perspective of writing a statistical analysis plan, we consider how to choose between the three most promising broad approaches: direct adjustment, standardisation and inverse-probability-of-treatment weighting (IPTW).
The three approaches are similar in being asymptotically efficient, in losing efficiency with mis-specified covariate functions and in handling designed balance. If a marginal estimand is targeted (for example, a risk difference or survival difference), then direct adjustment should be avoided because it involves fitting non-standard models that are subject to convergence issues. Convergence is most likely with IPTW. Robust standard errors used by IPTW are anti-conservative at small sample sizes. All approaches can use similar methods to handle missing covariate data. With missing outcome data, each method has its own way to estimate a treatment effect in the all-randomised population. We illustrate some issues in a reanalysis of GetTested, a randomised trial designed to assess the effectiveness of an electronic sexually transmitted infection testing and results service.
No single approach is always best: the choice will depend on the trial context. We encourage trialists to consider all three methods more routinely.
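Of the three approaches, standardisation for a marginal estimand can be sketched as follows: obtain a predicted outcome by arm within each covariate level, then average those predictions over the covariate distribution of all randomised participants. The toy records below are invented; the observed stratum means stand in for a fitted outcome model's predictions:

```python
# Toy trial records: (covariate z, arm a, binary outcome y)
data = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 1),
        (1, 0, 1), (1, 0, 1), (1, 1, 1), (1, 1, 1)]

def stratum_mean(a, z):
    """Observed mean outcome in arm a at covariate level z
    (stands in for a fitted outcome model's prediction)."""
    ys = [y for zz, aa, y in data if zz == z and aa == a]
    return sum(ys) / len(ys)

# Standardise: average each arm's predicted outcome over the
# covariate distribution of the whole randomised sample.
n = len(data)
risk1 = sum(stratum_mean(1, z) for z, _, _ in data) / n
risk0 = sum(stratum_mean(0, z) for z, _, _ in data) / n
print(round(risk1 - risk0, 3))  # 0.25: marginal risk difference
```

The same averaging step works with predictions from any outcome regression, which is why standardisation delivers a marginal estimand without fitting the non-standard models that direct adjustment would require.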
Summary Background Tuberculosis elimination in countries with a low incidence of the disease necessitates multiple interventions, including innovations in migrant screening. We examined a cohort of migrants screened for tuberculosis before entry to England, Wales, and Northern Ireland and tracked the development of disease in this group after arrival. Methods As part of a pilot pre-entry screening programme for tuberculosis in 15 countries with a high incidence of the disease, the International Organization for Migration screened all applicants for UK visas aged 11 years or older who intended to stay for more than 6 months. Applicants underwent a chest radiograph, and any with results suggestive of tuberculosis underwent sputum testing and culture testing (when available). We tracked the development of tuberculosis in those who tested negative for the disease and subsequently migrated to England, Wales, and Northern Ireland with the Enhanced Tuberculosis Surveillance system. Primary outcomes were cases of all forms of tuberculosis (including clinically diagnosed cases), and bacteriologically confirmed pulmonary tuberculosis. Findings Our study cohort was 519 955 migrants who were screened for tuberculosis before entry to the UK between Jan 1, 2006, and Dec 31, 2012. Cases notified on the Enhanced Tuberculosis Surveillance system between Jan 1, 2006, and Dec 31, 2013, were included. 1873 incident cases of all forms of tuberculosis were identified, and, on the basis of data for England, Wales, and Northern Ireland, the estimated incidence of all forms of tuberculosis in migrants screened before entry was 147 per 100 000 person-years (95% CI 140–154). The estimated incidence of bacteriologically confirmed pulmonary tuberculosis in migrants screened before entry was 49 per 100 000 person-years (95% CI 45–53).
Migrants whose chest radiographs were compatible with active tuberculosis but with negative pre-entry microbiological results were at increased risk of tuberculosis compared with those with no radiographic abnormalities (incidence rate ratio 3·2, 95% CI 2·8–3·7; p<0·0001). Incidence of tuberculosis after migration increased significantly with increasing WHO-estimated prevalence of tuberculosis in migrants' countries of origin. 35 of 318 983 pre-entry screened migrants included in a secondary analysis with typing data were assumed index cases. Estimates of the rate of assumed reactivation tuberculosis ranged from 46 (95% CI 42–52) to 91 (82–102) per 100 000 population. Interpretation Migrants from countries with a high incidence of tuberculosis screened before being granted entry to low-incidence countries pose a negligible risk of onward transmission but are at increased risk of tuberculosis, which could potentially be prevented through identification and treatment of latent infection in close collaboration with a pre-entry screening programme. Funding Wellcome Trust, UK National Institute for Health Research, UK Medical Research Council, Public Health England, and Department of Health Policy Research Programme.
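The incidence estimate and its confidence interval behave like a Poisson rate: with 1873 cases, the relative half-width of a Wald-type 95% interval is roughly 1.96/sqrt(cases). The sketch below reproduces the published interval for all forms of tuberculosis from the abstract's own figures; this is a simplification, and the paper's exact interval method may differ:

```python
import math

cases = 1873    # incident cases of all forms of tuberculosis
rate = 147.0    # per 100 000 person-years, as reported

# Wald-type 95% CI for a Poisson rate: half-width ~ 1.96 * rate / sqrt(d)
half_width = 1.96 * rate / math.sqrt(cases)
lo, hi = rate - half_width, rate + half_width
print(round(lo), round(hi))  # 140 154, matching the published 140-154
```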