There are reports of a high sensitivity of prostate cancer to radiotherapy dose fractionation, and this has prompted several trials of hypofractionation schedules. It remains unclear whether hypofractionation will provide a significant therapeutic benefit in the treatment of prostate cancer, and whether there are different fractionation sensitivities for different stages of disease. In order to address this, multiple primary datasets have been collected for analysis.
Seven datasets were assembled from institutions worldwide. A total of 5969 patients were treated with external-beam radiotherapy, with or without androgen deprivation (AD). Standard fractionation (1.8-2.0 Gy per fraction) was used for 40% of the patients, and hypofractionation (2.5-6.7 Gy per fraction) for the remainder. The overall treatment time ranged from 1 to 8 weeks. Low-risk patients comprised 23% of the total, intermediate-risk 44%, and high-risk 33%. The primary data for tumor control at 5 years, using the Phoenix criterion of biochemical relapse-free survival (bNED), were analyzed directly to estimate the linear-quadratic parameters k (the natural log of the effective target cell number), α (the dose-response slope at very low doses per fraction), and the ratio α/β that characterizes dose-fractionation sensitivity.
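As a rough illustration of how these parameters enter a linear-quadratic tumour-control calculation, the sketch below uses the standard Poisson form with an LQ cell-kill exponent; this is an assumption about the general form of the model, not a reproduction of the authors' fitting procedure, and the α and k values in the example are placeholders rather than fitted estimates.

    import numpy as np

    def lq_tcp(total_dose, dose_per_fraction, alpha, alpha_beta, k):
        """Poisson tumour-control probability under a linear-quadratic model.

        k          -- natural log of the effective target cell number
        alpha      -- dose-response slope at very low doses per fraction (1/Gy)
        alpha_beta -- fractionation-sensitivity ratio alpha/beta (Gy)
        """
        # LQ log cell kill accumulated over the whole schedule
        log_cell_kill = alpha * total_dose * (1.0 + dose_per_fraction / alpha_beta)
        # Expected surviving clonogens, then Poisson probability that none survive
        surviving = np.exp(k - log_cell_kill)
        return np.exp(-surviving)

    # Two schedules compared at alpha/beta = 1.4 Gy (placeholder alpha and k)
    print(lq_tcp(78.0, 2.0, alpha=0.04, alpha_beta=1.4, k=7.0))  # conventional
    print(lq_tcp(60.0, 3.0, alpha=0.04, alpha_beta=1.4, k=7.0))  # hypofractionated

At a low α/β the two schedules give nearly the same modelled control, which is the logic behind hypofractionation discussed in the conclusions.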
There was no significant difference in the α/β value among the three risk groups, and the α/β value for the pooled data was 1.4 Gy (95% CI 0.9-2.2). Androgen deprivation improved the bNED outcome by about 5% for all risk groups, but did not affect the α/β value.
The overall α/β value was consistently low, unaffected by AD, and lower than the corresponding values for late normal-tissue morbidity. Hence the fractionation-sensitivity differential (tumor versus normal tissue) favors the use of hypofractionated radiotherapy schedules for all risk groups, which is also very beneficial logistically in limited-resource settings.
Full text
Available for:
GEOZS, IJS, NUK, OILJ, UL, UM, UPUK
Despite membranous nephropathy (MN) being one of the most common causes of nephrotic syndrome worldwide, its biological and environmental determinants are poorly understood, in large part because it is a rare disease. Making use of the UK Biobank, a unique resource holding a clinical dataset and stored DNA, serum and urine for ~500,000 participants, this study aims to address this gap in understanding.
The primary outcome was putative MN, defined by ICD-10 codes recorded in the UK Biobank. Univariate relative risk regression modelling was used to assess associations of the incidence of MN and related phenotypes with sociodemographic factors, environmental exposures, and previously described risk-increasing SNPs.
502,507 participants were included in the study, of whom 100 were found to have a putative diagnosis of MN: 36 at baseline and 64 during follow-up. Prevalence at baseline and at last follow-up was 72 and 199 cases per million, respectively. At baseline, as expected, the majority of those previously diagnosed with MN had proteinuria, and there was already evidence of proteinuria in patients diagnosed within the first 5 years of follow-up. The highest incidence rate of MN was seen in those homozygous for the high-risk alleles (9.9/100,000 person-years).
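The prevalence figures quoted above follow directly from the case counts over the included cohort; a small arithmetic check (person-year denominators for the genetic subgroups are not given in the abstract, so the incidence rate is not reproduced here):

    cohort = 502_507          # UK Biobank participants included
    cases_baseline = 36       # putative MN diagnoses at baseline
    cases_total = 100         # putative MN diagnoses by last follow-up

    prevalence_baseline = cases_baseline / cohort * 1_000_000   # ~72 cases/million
    prevalence_followup = cases_total / cohort * 1_000_000      # ~199 cases/million
    print(round(prevalence_baseline), round(prevalence_followup))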
It is feasible to putatively identify patients with MN in the UK Biobank and cases are still accumulating. This study shows the chronicity of disease with proteinuria present years before diagnosis. Genetics plays an important role in disease pathogenesis, with the at-risk group providing a potential population for recall.
Full text
Available for:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
The U.S. Food and Drug Administration (US FDA) has identified dietary exposure to heavy metals as a public health concern, focusing particularly on arsenic, cadmium, lead, and mercury. One way to determine current risk is to compare established safe exposure limits (reference values) with current population-based dietary background levels. Information on reference values and dietary background exposures for these metals and chromium was critically evaluated in support of an interactive risk assessment screening tool, the Heavy Metals Screening Tool (HMST). Cadmium, arsenic, and mercury background exposures from food and water were found to be below current safe US regulatory limits based on non-cancer effects, while lead background exposures were nearly equivalent to the US FDA's newest interim reference level for children. Because detections of chromium in foods are infrequent and data on speciation (trivalent versus hexavalent) are limited, chromium was excluded from the HMST. The focus of this work was to present US-based reference and background exposure values, although the tool can use inputs that may be more appropriate for other countries, cultures, and situations. With emerging science, new health endpoints, and changes in food consumption trends, both reference values and background exposure levels are likely to evolve.
•The Institute for the Advancement of Food and Nutritional Sciences has updated a heavy metals screening tool for foods.
•The update included re-evaluated toxicity values and dietary exposure estimates used to estimate potential risks.
•Exposures to cadmium, arsenic, and mercury were found to be below reference values based on non-cancer effects.
•Because exposure to chromium in foods is minimal, chromium was removed from the screening tool.
•Intake of lead from food and water is nearly equivalent to recent reference values set by the FDA for young children.
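The screening logic described above reduces to comparing an estimated dietary background intake with the corresponding reference value; a minimal sketch of that comparison follows. The element names and numbers are illustrative placeholders, not values taken from the HMST.

    def screening_ratio(background_intake_ug_day, reference_value_ug_day):
        """Ratio of estimated background dietary intake to the safe reference value.

        A ratio well below 1 suggests background exposure sits under the limit;
        a ratio near or above 1 flags the element for closer evaluation.
        """
        return background_intake_ug_day / reference_value_ug_day

    # Hypothetical intakes and reference values in micrograms/day (not HMST data)
    for element, intake, reference in [("cadmium", 4.0, 21.0), ("lead", 2.1, 2.2)]:
        print(element, round(screening_ratio(intake, reference), 2))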
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP
Identifying overlapping communities in networks is a challenging task. In this work we present a probabilistic approach to community detection that utilizes a Bayesian non-negative matrix factorization model to extract overlapping modules from a network. The scheme has the advantage of soft-partitioning solutions, assignment of node participation scores to modules, and an intuitive foundation. We present the performance of the method against a variety of benchmark problems and compare and contrast it to several other algorithms for community detection.
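As a rough sketch of the soft-partitioning idea, the example below factorizes a toy adjacency matrix with ordinary non-negative matrix factorization from scikit-learn (a stand-in for the Bayesian NMF model developed in the paper) and reads the normalized rows of W as per-node module participation scores.

    import numpy as np
    import networkx as nx
    from sklearn.decomposition import NMF

    # Toy graph: two tightly connected groups {0,1,2,3} and {3,4,5,6} sharing node 3
    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
             (3, 4), (3, 5), (4, 5), (4, 6), (5, 6)]
    G = nx.Graph(edges)
    A = nx.to_numpy_array(G)

    # Factorize A ~ W H with two components; rows of W give soft memberships
    model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
    W = model.fit_transform(A)

    # Normalize rows to read them as participation scores across the two modules
    scores = W / W.sum(axis=1, keepdims=True)
    for node, row in zip(G.nodes(), np.round(scores, 2)):
        print(node, row)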
Full text
Available for:
CMK, CTK, FMFMET, IJS, NUK, PNG, UM
The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers, do not take into account variance induced by non-uniform data density, and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPz). The predictive variance of the model takes into account both the variance due to data density and the photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as tpz and annz2. We provide MATLAB and Python implementations, available to download at https://github.com/OxfordML/GPz.
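GPz itself is distributed at the repository linked above; as a much-simplified illustration of returning a predictive variance alongside each point estimate, the sketch below uses ordinary Gaussian-process regression from scikit-learn on synthetic data, without GPz's sparse basis functions, heteroscedastic noise model, or cost-sensitive weighting.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: five "colour" features and a redshift-like target
    X = rng.normal(size=(500, 5))
    z = X[:, 0] ** 2 + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[:400], z[:400])

    # Point estimates and per-object predictive standard deviations
    z_pred, z_std = gp.predict(X[400:], return_std=True)
    print(z_pred[:3], z_std[:3])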
To assess implementation of the Saving Babies Lives (SBL) Care Bundle, a collection of practice recommendations in four key areas, to reduce stillbirth in England.
A retrospective cohort study of 463,630 births in 19 NHS Trusts in England, using routinely collected electronic data supplemented with case-note audit (n = 1,658) and surveys of service users (n = 2,085) and health care professionals (n = 1,064). The primary outcome was stillbirth rate. Outcome rates two years before and after the nominal SBL implementation date were derived as a measure of change over the implementation period. Data were collected on secondary outcomes and on process outcomes that reflected implementation of the SBL care bundle.
The total stillbirth rate declined from 4.2 to 3.4 per 1,000 births between the two time points (adjusted relative risk (aRR) 0.80, 95% confidence interval (CI) 0.70 to 0.91, p<0.001). There was a contemporaneous increase in induction of labour (aRR 1.20, 95% CI 1.18-1.21, p<0.001) and emergency Caesarean section (aRR 1.10, 95% CI 1.07-1.12, p<0.001). The number of ultrasound scans performed (aRR 1.25, 95% CI 1.21-1.28, p<0.001) and the proportion of small-for-gestational-age infants detected (aRR 1.59, 95% CI 1.32-1.92, p<0.001) also increased. Organisations reporting higher levels of implementation had improvements in process measures in all elements of the care bundle. An economic analysis estimated the cost of implementing the care bundle at ~£140 per birth. However, neither the costs nor the changes in outcomes could be definitively attributed to implementation of the SBL care bundle.
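For orientation, the crude ratio of the two stillbirth rates is close to the adjusted estimate reported above; the adjusted relative risks in the study come from regression modelling, not from this simple division.

    rate_before = 4.2   # stillbirths per 1,000 births before implementation
    rate_after = 3.4    # stillbirths per 1,000 births after implementation

    crude_rr = rate_after / rate_before
    print(round(crude_rr, 2))   # ~0.81, close to the adjusted RR of 0.80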
Implementation of the SBL care bundle increased over time in the majority of sites. Implementation was associated with improvements in process outcomes. The reduction in stillbirth rates in participating sites exceeded that reported nationally in the same timeframe. The intervention should be refined to identify women who are most likely to benefit and minimise unwarranted intervention.
The study was registered at www.clinicaltrials.gov (NCT03231007).
Full text
Available for:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
The global network of scientific collaboration created by researchers opens new opportunities for developing countries to engage in the process of knowledge creation historically led by institutions in the developed world. The results discussed here explore how Cubans working in European science and technology might contribute to extending the scientific collaboration of the country through their ties with Cuban institutions, mainly in the academic sector. A bibliometric method was used to explore the pattern of collaboration of Cuban researchers in Europe using the institutional affiliations of authors and collaborators. The records of scientific publications of the defined sample were obtained from the Scopus database for the period between 1995 and 2014. The network of collaboration was generated using the affiliations of Cuban authors in Europe and co-authors with worldwide affiliations shown in the records of publications of each Cuban researcher in the study. The analysis of aggregate values of the output of Cuban researchers in Europe (1995–2014) reveals that their collaboration with Cuba correlates moderately with their performance in Europe. However, when taking into account their time publishing in Europe, the collaboration with Cuba decreases the longer they remain away from home. The network of collaborating Cuban researchers in Europe comprises 991 different affiliations from 58 countries: 698 from Europe, 118 from North America, 96 from Latin America and 79 from the rest of the world. K-core analysis of centrality shows two Cuban universities sharing the central position with another 24 institutions worldwide, of which 18 belong to higher education.
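A small sketch of the kind of k-core centrality analysis described above, using networkx on an invented co-affiliation network (the institution names and links are illustrative only, not data from the study):

    import networkx as nx

    # Invented co-authorship links between affiliations (not the study's data)
    edges = [
        ("Univ Havana", "Univ A"), ("Univ Havana", "Univ B"), ("Univ Havana", "Univ C"),
        ("CUJAE", "Univ A"), ("CUJAE", "Univ B"), ("CUJAE", "Univ C"),
        ("Univ A", "Univ B"), ("Univ B", "Univ C"), ("Univ A", "Univ C"),
        ("Univ D", "Univ A"),   # peripheral affiliation outside the densest core
    ]
    G = nx.Graph(edges)

    # The main k-core contains the most densely interconnected affiliations
    core = nx.k_core(G)
    print(sorted(core.nodes()))
    print(nx.core_number(G))    # core number of every affiliation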
Full text
Available for:
EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NLZOH, NUK, OBVAL, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, ZAGLJ
This tutorial describes the mean-field variational Bayesian approximation to inference in graphical models, using modern machine learning terminology rather than statistical physics concepts. It begins by seeking an approximate mean-field distribution close to the target joint distribution in the KL-divergence sense. It then derives local node updates and reviews the recent Variational Message Passing framework.
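The central step the tutorial derives can be stated compactly. Assuming a fully factorized (mean-field) approximation, minimizing the KL divergence to the true posterior gives the standard coordinate update for each factor (written here in generic notation; the tutorial's own derivation is phrased in terms of graphical-model nodes and their neighbours):

    q(\mathbf{x}) = \prod_j q_j(x_j),
    \qquad
    q_j^{*}(x_j) \propto \exp\!\Big( \mathbb{E}_{\prod_{i \ne j} q_i}\big[ \ln p(\mathbf{x}, \mathcal{D}) \big] \Big),

which is equivalent to maximizing the lower bound \mathcal{L}(q) = \mathbb{E}_q[\ln p(\mathbf{x}, \mathcal{D})] - \mathbb{E}_q[\ln q(\mathbf{x})] on the log evidence \ln p(\mathcal{D}).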
Full text
Available for:
CEKLJ, EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NUK, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, ZAGLJ