The presence of oxygen in tumours has a substantial impact on treatment outcome; relative to anoxic regions, well-oxygenated cells respond better to radiotherapy by a factor of 2.5–3. This increased radio-response is known as the oxygen enhancement ratio. The oxygen effect is most commonly explained by the oxygen fixation hypothesis, which postulates that radical-induced DNA damage can be permanently 'fixed' by molecular oxygen, rendering the damage irreparable. While the oxygen effect is important both in existing therapy and for future modalities such as radiation dose-painting, the majority of existing mathematical models for oxygen enhancement are empirical rather than based on the underlying physics and radiochemistry. Here we propose a model of oxygen-enhanced damage from physical first principles, investigating factors that might influence cell kill. The model is fitted to a range of experimental oxygen curves from the literature and shown to describe them well, yielding a single robust term for oxygen interaction. The model also reveals that a small thermal dependency exists, but that this is unlikely to be exploitable.
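For context, the empirical description such models are usually contrasted with is the classic Alper-Howard-Flanders relation, OER(p) = (m·p + K)/(p + K). A minimal sketch, using textbook parameter values (m ≈ 3, K ≈ 3 mmHg) rather than anything fitted in this paper:

```python
def oer(p_o2_mmhg, m=3.0, k_mmhg=3.0):
    """Alper-Howard-Flanders empirical oxygen enhancement ratio.

    p_o2_mmhg: oxygen partial pressure in mmHg
    m:         maximum enhancement at full oxygenation (~2.5-3)
    k_mmhg:    half-saturation constant (~3 mmHg, illustrative)
    """
    return (m * p_o2_mmhg + k_mmhg) / (p_o2_mmhg + k_mmhg)

# Anoxic cells see no enhancement; well-oxygenated cells approach m.
print(round(oer(0.0), 2))    # 1.0
print(round(oer(160.0), 2))  # approaches 3 at physiological oxygenation
```

The steep rise at low partial pressures is why even modest reoxygenation of hypoxic tumour regions changes predicted cell kill appreciably.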
Likelihood ratios can refine clinical diagnosis on the basis of signs and symptoms; however, they are underused in patient care. A likelihood ratio is the percentage of ill people with a given test result divided by the percentage of well individuals with the same result. Ideally, abnormal test results should be much more typical in ill individuals than in those who are well (high likelihood ratio), and normal test results should be more frequent in well people than in sick people (low likelihood ratio). Likelihood ratios near unity have little effect on decision-making; by contrast, high or low ratios can greatly shift the clinician's estimate of the probability of disease. Likelihood ratios can be calculated not only for dichotomous (positive or negative) tests but also for tests with multiple levels of results, such as creatine kinase or ventilation-perfusion scans. When combined with an accurate clinical diagnosis, likelihood ratios from ancillary tests improve diagnostic accuracy in a synergistic manner.
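The odds-based arithmetic behind these statements can be sketched in a few lines (function names are illustrative; the sensitivity and specificity values are invented for the example):

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ and LR- for a dichotomous test."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def posttest_probability(pretest_prob, lr):
    """Update a pretest probability with a likelihood ratio via odds."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# A test with 90% sensitivity and 95% specificity:
lr_pos, lr_neg = likelihood_ratios(0.90, 0.95)
print(round(lr_pos, 1))  # 18.0 -- a large shift toward disease
print(round(lr_neg, 2))  # 0.11 -- a large shift away from disease
# The high LR+ moves a 20% pretest probability to ~82%:
print(round(posttest_probability(0.20, lr_pos), 2))  # 0.82
```

A likelihood ratio of 1 leaves the pretest odds, and hence the probability, unchanged, which is why such results carry little decision-making weight.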
In biomedical science, it is a reality that many published results do not withstand deeper investigation, and there is growing concern over a replicability crisis in science. Recently, Ellipse of Insignificance (EOI) analysis was introduced as a tool to allow researchers to gauge the robustness of reported results in dichotomous outcome design trials, giving precise deterministic values for the degree of miscoding between events and non-events tolerable simultaneously in both control and experimental arms (Grimes, 2022). While this is useful for situations where potential miscoding might transpire, it does not account for situations where apparently significant findings might result from accidental or deliberate data redaction in either the control or experimental arms of an experiment, or from missing data or systematic redaction. To address these scenarios, we introduce the Region of Attainable Redaction (ROAR), a tool that extends EOI analysis to account for situations of potential data redaction. This produces a bounded cubic curve rather than an ellipse, and we outline how it can be used to identify potential redaction through an approach analogous to EOI. Applications are illustrated, and source code, including a web-based implementation that performs EOI and ROAR analysis in tandem for dichotomous outcome trials, is provided.
In optical dipole traps, the excited rotational states of a molecule may experience a very different light shift than the ground state. For particles with two polarizability components (parallel and perpendicular), such as linear Σ molecules, the differential shift can be nulled by a choice of elliptical polarization. When one component of the polarization vector is ±i√2 times the orthogonal component, the light shift for a sublevel of the excited rotational states approaches that of the ground state at high optical intensity. In this case, fluctuating trap intensity need not limit coherence between ground and excited rotational states.
Objective: Cervical screening is a life-saving intervention which reduces the incidence of and mortality from cervical cancer in the population. Human papillomavirus (HPV)-based screening modalities hold unique promise in improving screening accuracy. HPV prevalence varies markedly by age, as does resultant cervical intraepithelial neoplasia (CIN), with higher rates recorded in younger women. With the advent of effective vaccination for HPV drastically reducing the prevalence of both HPV and CIN, it is critical to model how the accuracy of different screening approaches varies with age cohort and vaccination status. This work establishes a model for the age-specific prevalence of HPV factoring in vaccine coverage and predicts how the accuracy of common screening modalities is affected by age profile and vaccine uptake.
Design: Modelling study of HPV infection rates by age, ascertained from European cohorts prior to the introduction of vaccination. Reductions in HPV due to vaccination were estimated from the bounds predicted by multiple modelling studies, yielding a model for age-varying prevalence of HPV and CIN grade 2 and above (CIN2+).
Setting: Performance of both conventional liquid-based cytology (LBC) screening and HPV screening with LBC reflex (HPV reflex) was estimated under different simulated age cohorts and vaccination levels.
Participants: Simulated populations of varying age and vaccination status.
Results: HPV-reflex modalities consistently result in a much lower incidence of false positives than LBC testing, with an accuracy that improves even as HPV and CIN2+ rates decline.
Conclusions: HPV-reflex tests outperform LBC tests across all age profiles, resulting in greater test accuracy. This improvement is especially pronounced as HPV infection rates fall, and suggests HPV-reflex modalities are robust to future changes in the epidemiology of HPV.
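The dependence of screening accuracy on prevalence follows directly from Bayes' theorem: for fixed sensitivity and specificity, positive predictive value falls as disease becomes rarer. A minimal sketch with assumed test characteristics (not the paper's fitted values):

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value from prevalence and test characteristics."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# Illustrative (assumed) characteristics of a single screening test:
sens, spec = 0.90, 0.90
print(round(ppv(0.10, sens, spec), 3))  # 0.5   at 10% prevalence
print(round(ppv(0.02, sens, spec), 3))  # 0.155 at 2% prevalence
```

As vaccination drives HPV prevalence down, the PPV of any single test erodes; reflex strategies that raise effective specificity counteract exactly this effect, which is consistent with the Results above.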
Distributed hydrological models require extensive amounts of data for driving the models and for parameterization of the land surface and subsurface. This study investigates the potential of applying remote sensing (RS) based input data in a hydrological model for the 350,000 km² Senegal River basin in West Africa. By utilizing remote sensing data to estimate precipitation, potential evapotranspiration (PET) and leaf area index (LAI), the model was driven entirely by remote sensing based data, independent of traditional meteorological data. The remote sensing retrievals were based on data from the geostationary METEOSAT-7 and the polar orbiting advanced very high resolution radiometer (AVHRR) sensors using well-documented techniques.
The distributed hydrological model MIKE SHE was calibrated and validated against observed discharge for six individual subcatchments during the period 1998–2005. The model generally performed well in terms of root mean square error (RMSE), water balance error (WBE) and correlation coefficient (R²). For comparison, a model based on standard meteorological driving variables was developed for a single subcatchment. The two models, based on remote sensing and conventional data respectively, exhibited similar performance. Simulated actual evapotranspiration (AET) was compared to measurements at point scale, and good agreement was obtained both on an event basis and seasonally. Although the spatial model simulations cannot be evaluated quantitatively, a comparison between spatial outputs of AET from both model setups was carried out. This revealed substantial differences in the spatial patterns of AET for the examined subcatchment, in spite of similar values of predicted discharge and average AET. The potential for driving large-scale hydrological models using remote sensing data was clearly demonstrated, and is further emphasized by the long time records and near real-time accessibility of the satellite data sources.
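The three performance metrics named above have standard definitions; a generic sketch with made-up discharge values (taking R² as the squared Pearson correlation, which is the usual reading of the abbreviation here):

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def water_balance_error(obs, sim):
    """Relative error in total simulated volume, in percent."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

def r_squared(obs, sim):
    """Squared Pearson correlation coefficient."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    var_o = sum((o - mo) ** 2 for o in obs)
    var_s = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (var_o * var_s)

# Invented discharge values, purely for illustration:
obs = [10.0, 20.0, 30.0, 40.0]
sim = [12.0, 18.0, 33.0, 39.0]
print(round(rmse(obs, sim), 2))                # 2.12
print(round(water_balance_error(obs, sim), 1)) # 2.0 (percent)
print(round(r_squared(obs, sim), 3))           # 0.966
```

WBE and R² capture complementary failure modes: a model can track discharge dynamics well (high R²) while systematically over- or under-predicting volume (large WBE), or vice versa.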
Forgettable contraception. Grimes, David A.
Contraception (Stoneham), 12/2009, Volume 80, Issue 6.
Journal Article, peer reviewed.
Abstract: The term "forgettable contraception" has received less attention in family planning than has "long-acting reversible contraception." Defined here as a method requiring attention no more often than every 3 years, forgettable contraception includes sterilization (female or male), intrauterine devices, and implants. Five principal factors determine contraceptive effectiveness: efficacy, compliance, continuation, fecundity, and the timing of coitus. Of these, compliance and continuation dominate; the key determinants of contraceptive effectiveness are human, not pharmacological. Human nature undermines methods with high theoretical efficacy, such as oral contraceptives and injectable contraceptives. By obviating the need to think about contraception for long intervals, forgettable contraception can help overcome our human fallibility. As a result, all forgettable contraception methods provide first-tier effectiveness (≤2 pregnancies per 100 women per year) in typical use. Stated alternatively, the only class of contraceptives today with exclusively first-tier effectiveness is the one that can be started and then forgotten for years.
Tropical Applications of Meteorology Using Satellite Data and Ground-Based Observations (TAMSAT) rainfall monitoring products have been extended to provide spatially contiguous rainfall estimates across Africa. This has been achieved through a new, climatology-based calibration, which varies in both space and time. As a result, cumulative estimates of rainfall are now issued at the end of each 10-day period (dekad) at 4-km spatial resolution with pan-African coverage. The utility of the products for decision making is improved by the routine provision of validation reports, in which the 10-day (dekadal) TAMSAT rainfall estimates are compared with independent gauge observations. This paper describes the methodology by which the TAMSAT method has been applied to generate the pan-African rainfall monitoring products. It is demonstrated through comparison with gauge measurements that the method provides skillful estimates, although with a systematic dry bias. This study illustrates TAMSAT's value as a complementary method of estimating rainfall through examples of successful operational application.
There is increasing awareness throughout biomedical science that many results do not withstand the trials of repeat investigation. The growing abundance of medical literature has only increased the urgent need for tools to gauge the robustness and trustworthiness of published science. Dichotomous outcome designs are vital in randomized clinical trials, cohort studies, and observational data for ascertaining differences between experimental and control arms. It has, however, been shown with tools like the fragility index (FI) that many ostensibly impactful results fail to materialize when even small numbers of patients or subjects in either the control or experimental arms are recoded from event to non-event. Critics of this metric counter that there is no objective means to determine a meaningful FI, and as currently used, FI is not multidimensional and is computationally expensive. In this work, a conceptually similar geometrical approach is introduced: the ellipse of insignificance. This method yields precise deterministic values for the degree of manipulation or miscoding that can be tolerated simultaneously in both control and experimental arms, allowing for the derivation of objective measures of experimental robustness. More than this, the tool is intimately connected with the sensitivity and specificity of the event/non-event tests, and is readily combined with knowledge of test parameters to reject unsound results. The method is outlined here, with illustrative clinical examples.
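For comparison with the geometric EOI approach, the fragility-index-style recoding it generalizes can be sketched directly. This is a crude one-dimensional illustration under assumed conventions (flipping outcomes in one arm only, two-sided Fisher exact test, alpha = 0.05), not the paper's deterministic method; the function names are mine:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    summing all tables at fixed margins no more probable than the observed."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def prob(x):  # hypergeometric probability of x events in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    support = range(max(0, col1 - row2), min(col1, row1) + 1)
    return sum(p for p in map(prob, support) if p <= p_obs * (1 + 1e-9))

def flips_to_insignificance(e_t, n_t, e_c, n_c, alpha=0.05):
    """Flip treatment-arm outcomes one at a time toward the control arm's
    event rate until p >= alpha; returns the flip count (None if never)."""
    step = 1 if e_t / n_t < e_c / n_c else -1
    for k in range(n_t + 1):
        e = e_t + step * k
        if not 0 <= e <= n_t:
            return None
        if fisher_two_sided(e, n_t - e, e_c, n_c - e_c) >= alpha:
            return k
    return None

# A nominally significant trial (1/100 vs 10/100 events) that loses
# significance after very few flips is fragile:
print(flips_to_insignificance(1, 100, 10, 100))
```

EOI replaces this one-at-a-time search with a closed-form region in the plane of simultaneous miscodings in both arms, which is what makes it deterministic and cheap relative to iterating over recodings.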
Science and medicine are vital to the well-being of humankind. Yet for all the incredible advances science has made, the unfortunate reality is that a worrying fraction of biological research is not reliable. Erroneous results might arise by chance or because of scientists’ mistakes or ineptitude. Very occasionally, researchers may behave unethically and fabricate or inappropriately manipulate their data.
Inevitably, this can lead to untrustworthy research that misleads scientists and the public on questions integral to our health. Indeed, a recent study showed the results of several high-profile cancer papers could not be fully replicated. This problem is not unique to cancer, and studies on various other diseases have also not stood up to scrutiny from outside investigators. Finding ways to detect dubious results is therefore essential to protect the public’s well-being and maintain public trust in science.
Here, Grimes demonstrates a new tool called the 'Ellipse of Insignificance' for measuring the reliability of dichotomous studies, which are commonly used in many branches of biomedical science, including clinical trials. These studies typically compare two groups: one which was subjected to a specific treatment, and a control group which was not. Statistical methods are then applied to estimate how likely it is that differences in the number of observed events between the groups are real rather than due to chance.
The tool created by Grimes explores what would happen to seemingly strong results if some of the events in both the control and experimental arms of the study were recoded. It then assesses how much nudging is needed to change the statistical outcome of the experiment: the more interventions the result can withstand, the more robust the experiment. Grimes tested the tool and showed that a study suggesting a link between miscarriage and magnetic field exposure was likely unreliable, because shifting the outcomes of fewer than two participants would change the result.
Scientists could use the Ellipse of Insignificance tool to quickly identify misleading published results or potential research fraud. Doing this could benefit researchers and protect the public from potential harm. It may also help preserve research integrity, increase transparency, and bolster public trust in science.
Readers of medical literature need to consider two types of validity, internal and external. Internal validity means that the study measured what it set out to measure; external validity is the ability to generalise from the study to the reader's patients. With respect to internal validity, selection bias, information bias, and confounding are present to some degree in all observational research. Selection bias stems from an absence of comparability between the groups being studied. Information bias results from incorrect determination of exposure, outcome, or both. The effect of information bias depends on its type: if information is gathered differently for one group than for another, bias results; by contrast, non-differential misclassification tends to obscure real differences. Confounding is a mixing or blurring of effects: a researcher attempts to relate an exposure to an outcome but actually measures the effect of a third factor (the confounding variable). Confounding can be controlled in several ways: restriction, matching, stratification, and more sophisticated multivariate techniques. If a reader cannot explain away study results on the basis of selection, information, or confounding bias, then chance might be another explanation. Chance should be examined last, however, since these biases can account for highly significant, though bogus, results. Differentiation between spurious, indirect, and causal associations can be difficult. Criteria such as temporal sequence, strength and consistency of an association, and evidence of a dose-response effect lend support to a causal link.
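Of the listed controls for confounding, stratification is the easiest to illustrate: the Mantel-Haenszel estimator pools stratum-specific odds ratios, removing the mixing of effects that a crude (unstratified) analysis would suffer. A minimal sketch with hypothetical data:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel pooled odds ratio across strata.

    Each stratum is a 2x2 table (a, b, c, d) =
    (exposed cases, exposed controls, unexposed cases, unexposed controls).
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical data: two strata of a confounding variable, each with the
# same within-stratum association between exposure and outcome.
strata = [(10, 20, 5, 40), (40, 5, 20, 10)]
print(round(mantel_haenszel_or(strata), 2))  # 4.0
```

Each stratum here has an odds ratio of 4, and the pooled estimate recovers it; a crude analysis that collapsed the strata into one table would generally give a different, confounded value.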