The authors describe the design and implementation of a large multiethnic cohort established to study diet and cancer in the United States. They detail the source of the subjects, sample size, questionnaire development, pilot work, and approaches to future analyses. The cohort consists of 215,251 adult men and women (age 45–75 years at baseline) living in Hawaii and in California (primarily Los Angeles County) with the following ethnic distribution: African-American (16.3%), Latino (22.0%), Japanese-American (26.4%), Native Hawaiian (6.5%), White (22.9%), and other ancestry (5.8%). From 1993 to 1996, participants entered the cohort by completing a 26-page, self-administered mail questionnaire that elicited a quantitative food frequency history, along with demographic and other information. Response rates ranged from 20% in Latinos to 49% in Japanese-Americans. As expected, both within and among ethnic groups, the questionnaire data show substantial variations in dietary intakes (nutrients as well as foods) and in the distributions of non-dietary risk factors (including smoking, alcohol consumption, obesity, and physical activity). When compared with corresponding ethnic-specific cancer incidence rates, the findings provide tentative support for several current dietary hypotheses. As sufficient numbers of cancer cases are identified through surveillance of the cohort, dietary and other hypotheses will be tested in prospective analyses. Am J Epidemiol 2000;151:346–57.
In this work, a new clustering algorithm especially geared towards merging data arising from multiple sensors is presented. The algorithm, called PN-EAC, is based on the ensemble clustering paradigm and introduces the novel concept of negative evidence. PN-EAC combines positive evidence, which gathers information about the elements that should be grouped together in the final partition, with negative evidence, which carries information about the elements that should not be grouped together. The algorithm has been validated in the electrocardiographic domain for heartbeat clustering, extracting positive evidence from the heartbeat morphology and negative evidence from the distances between heartbeats. The best result obtained on the MIT-BIH Arrhythmia database yielded an error of 1.44%. In the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database (INCARTDB), an error of 0.601% was obtained when using two electrocardiogram (ECG) leads. When the number of leads is increased to 4, 6, 8, 10 and 12, the algorithm obtains statistically significantly better results than with the previous number of leads, reaching an error of 0.338%. To the best of our knowledge, this is the first clustering algorithm that can simultaneously process any number of ECG leads. Our results support the use of PN-EAC to combine different sources of information and the value of the negative evidence.
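The evidence-accumulation idea behind this family of algorithms can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' PN-EAC implementation: the weak base clusterer (quantile binning of random 1-D projections), the negative-evidence weight `w`, and all function names are inventions for the example; PN-EAC itself extracts its evidence from heartbeat morphology and inter-beat distances rather than from generic feature matrices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def random_projection_partition(X, k, rng):
    """A deliberately weak base clusterer: quantile-bin a random 1-D projection."""
    p = X @ rng.normal(size=X.shape[1])
    edges = np.quantile(p, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(p, edges)

def coassociation(partitions, n):
    """Fraction of base partitions in which each pair of elements co-clusters."""
    C = np.zeros((n, n))
    for labels in partitions:
        C += labels[:, None] == labels[None, :]
    return C / len(partitions)

def eac_positive_negative(X_pos, X_neg, n_final, n_runs=50, w=0.5, seed=0):
    """Evidence-accumulation clustering with negative evidence:
    co-association on X_pos votes pairs together, co-association
    on X_neg votes them apart."""
    rng = np.random.default_rng(seed)
    n = len(X_pos)
    ks = rng.integers(2, 6, size=n_runs)
    P = coassociation([random_projection_partition(X_pos, k, rng) for k in ks], n)
    N = coassociation([random_projection_partition(X_neg, k, rng) for k in ks], n)
    S = np.clip(P - w * N, 0.0, 1.0)      # combined evidence as a similarity
    np.fill_diagonal(S, 1.0)
    Z = linkage(squareform(1.0 - S), method="average")
    return fcluster(Z, t=n_final, criterion="maxclust")
```

Pairs that repeatedly co-cluster on the `X_pos` features accumulate positive evidence, while pairs that repeatedly co-cluster on the `X_neg` features are penalised, pushing them into different final clusters.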
We document the implementation of the Common Representative Intermediates Mechanism version 2, reduction 5 into the United Kingdom Chemistry and Aerosol model (UKCA) version 10.9. The mechanism is merged with the stratospheric chemistry already used by the StratTrop mechanism, as used in UKCA and the UK Earth System Model, to create a new CRI‐Strat mechanism. CRI‐Strat provides a more comprehensive treatment of non‐methane volatile organic compounds (NMVOCs) and traceability with the Master Chemical Mechanism. In total, CRI‐Strat simulates the chemistry of 233 species competing in 613 reactions (compared to 87 species and 305 reactions in the existing StratTrop mechanism). However, while more than twice as complex as StratTrop, the new mechanism is only 75% more computationally expensive. CRI‐Strat is evaluated against an array of in situ and remote sensing observations and against simulations using the StratTrop mechanism in the UKCA model. It is found to increase production of ozone near the surface, leading to higher ozone concentrations compared to surface observations. However, ozone loss is also greater in CRI‐Strat, leading to less ozone away from emission sources and a tropospheric ozone burden similar to StratTrop's. CRI‐Strat also produces more carbon monoxide than StratTrop, particularly downwind of biogenic VOC emission sources, but has lower burdens of nitrogen oxides, as more is converted into reservoir species. The changes to the tropospheric ozone and nitrogen budgets are sensitive to the treatment of NMVOC emissions, highlighting the need to reduce uncertainty in these emissions to improve representation of tropospheric chemical composition.
Plain Language Summary
To understand the climate and predict how it will change in the future, we need to understand its chemical composition—the trace gases and small particles that exist in tiny quantities in the atmosphere. Key tools for doing this are computer models, which simulate the atmosphere and the processes within it. Key processes include the formation of ozone, a harmful pollutant and greenhouse gas in the lower atmosphere. However, the chemistry involved in forming ozone is very complicated, so computer simulations of the atmosphere must greatly simplify the chemistry. These simple schemes may introduce errors in the model. We also have much more complex chemical mechanisms which simulate our best understanding of all chemical reactions, but these complex schemes require too much computational power to be used when simulating the whole atmosphere. In this paper, we describe the implementation of a chemical mechanism that sits between these levels of complexity, realistically simulating the formation and destruction of ozone without being too slow to run. We compare this new mechanism against measurements taken of the atmosphere and against the preexisting, simpler chemical mechanism, and show that the new mechanism greatly enhances the amount of ozone that is produced.
Key Points
The CRI‐Strat mechanism has been integrated into the United Kingdom Chemistry and Aerosol model, greatly increasing the complexity of volatile organic compound chemistry compared to StratTrop
CRI‐Strat simulates higher surface ozone compared to StratTrop due to greater production, but tropospheric ozone burden is similar
The ozone and oxidized nitrogen budgets when running with the CRI‐Strat mechanism show high sensitivity to the input non‐methane volatile organic compound emissions
Tuberculosis (TB) is a chronic communicable bacterial disease caused by Mycobacterium tuberculosis complex (MTBC) species. M. tuberculosis is the main causative agent of human TB, and cattle are the primary host of Mycobacterium bovis; because of the close interaction between cattle and humans, M. bovis poses a zoonotic risk. This review summarizes and estimates the prevalence of M. bovis infection among human TB cases. Studies reporting TB prevalence data that were published in English during the 10 years from 20 April 2009 to 17 April 2019 were identified through searches of PubMed and other sources. Quality of studies and risk of bias were assessed using standard tools for prevalence study reports. Characteristics of the included studies and their main findings were summarized in tables and discussed with narrative syntheses. Meta‐analysis was performed on 19 included studies, with a total of 7,185 MTBC isolates identified; 702 (9.7%) of them were characterized as M. bovis, but prevalence differed widely between studies, ranging from 0.4% to 76.7%. The genotyping‐based studies reported significantly lower prevalence of zoonotic TB than did the studies based on older techniques. The overall pooled prevalence of M. bovis aggregated from all included studies was 12.1% of the total MTBC isolates, while the corresponding pooled figure from the 14 genotyping‐based studies was only 1.4%. Generally, the human M. bovis cases reported from different countries of the world suggest that the impact of zoonotic TB remains important in all regions. However, it was difficult to establish the true picture of disease prevalence because of methodological differences between the studies. Future investigations on zoonotic TB should carefully consider these differences when evaluating prevalence results.
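The difference between the crude figure (702/7,185 ≈ 9.7%) and the pooled estimate (12.1%) arises because meta-analytic pooling weights studies rather than isolates. The sketch below illustrates that distinction with a simple inverse-variance fixed-effect pool; the review itself may have used a different (e.g. random-effects) model, and the counts in the usage assertions are hypothetical, not taken from the included studies.

```python
def crude_prevalence(cases, totals):
    """Aggregate (isolate-level) prevalence: all counts pooled into one fraction."""
    return sum(cases) / sum(totals)

def pooled_prevalence(cases, totals):
    """Inverse-variance fixed-effect pooling of per-study proportions:
    each study contributes one estimate, weighted by 1/variance."""
    ests, ws = [], []
    for c, n in zip(cases, totals):
        p = (c + 0.5) / (n + 1.0)          # continuity correction for c = 0 or c = n
        var = p * (1.0 - p) / n
        ests.append(p)
        ws.append(1.0 / var)
    return sum(w * p for w, p in zip(ws, ests)) / sum(ws)
```

Because each study contributes a single weighted proportion, studies with extreme prevalence can pull the pooled estimate well away from the isolate-level aggregate, which is one reason the two figures in the review differ.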
We examine the effects of ozone precursor emissions from megacities on present-day air quality using the global chemistry–climate model UM-UKCA (UK Met Office Unified Model coupled to the UK Chemistry and Aerosols model). The sensitivity of megacity and regional ozone to local emissions, both from within the megacity and from surrounding regions, is important for determining air quality across many scales, which in turn is key for reducing human exposure to high levels of pollutants. We use two methods, perturbation and tagging, to quantify the impact of megacity emissions on global ozone. We also completely redistribute the anthropogenic emissions from megacities, to compare changes in local air quality going from centralised, densely populated megacities to decentralised, lower density urban areas. Focus is placed not only on how changes to megacity emissions affect regional and global NOx and O3, but also on changes to NOy deposition and to local chemical environments which are perturbed by the emission changes. The perturbation and tagging methods show broadly similar megacity impacts on total ozone, with the perturbation method underestimating the contribution partially because it perturbs the background chemical environment. The total redistribution of megacity emissions locally shifts the chemical environment towards more NOx-limited conditions in the megacities, which is more conducive to ozone production, and monthly mean surface ozone is found to increase up to 30% in megacities, depending on latitude and season. However, the displacement of emissions has little effect on the global annual ozone burden (0.12% change). Globally, megacity emissions are shown to contribute ~3% of total NOy deposition. The changes in O3, NOx and NOy deposition described here are useful for quantifying megacity impacts and for understanding the sensitivity of megacity regions to local emissions.
The small global effects of the 100% redistribution carried out in this study suggest that the distribution of emissions on the local scale is unlikely to have large implications for chemistry–climate processes on the global scale.
Injection of aerosol particles (or their precursors) into the stratosphere to scatter solar radiation back into space has been suggested as a solar-radiation management scheme for the mitigation of global warming. TiO2 has recently been highlighted as a possible candidate particle because of its high refractive index, but its impact on stratospheric chemistry via heterogeneous reactions is as yet unknown. In this work the heterogeneous reaction of airborne sub-micrometre TiO2 particles with N2O5 has been investigated for the first time, at room temperature and different relative humidities (RH), using an atmospheric pressure aerosol flow tube. The uptake coefficient of N2O5 onto TiO2, γ(N2O5), was determined to be ~1.0 × 10⁻³ at low RH, increasing to ~3 × 10⁻³ at 60% RH. The uptake of N2O5 onto TiO2 is then included in the UKCA chemistry-climate model to assess the impact of this reaction on stratospheric chemistry. With the TiO2 loading chosen so that its scattering of solar radiation is similar to that of the aerosol from the Mt Pinatubo eruption, the impact of TiO2 injection on stratospheric N2O5 is found to be much smaller.
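Uptake coefficients of this kind are conventionally recovered from the measured first-order loss rate of N2O5 in the flow tube through the free-molecular relation γ = 4k / (c̄ · SA), where c̄ = sqrt(8RT/πM) is the mean molecular speed and SA the aerosol surface area density. The sketch below applies that standard relation only; the study's actual analysis may additionally correct for gas-phase diffusion, and the numeric inputs in the assertions are illustrative, not the paper's data.

```python
import math

def mean_speed(T, M):
    """Mean molecular speed in m/s: sqrt(8RT / (pi * M)), with M in kg/mol."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    return math.sqrt(8.0 * R * T / (math.pi * M))

def uptake_coefficient(k_het, surface_area_density, T=298.0, M=0.108):
    """gamma = 4 k / (c_bar * SA): k_het is the first-order loss rate (1/s),
    SA the aerosol surface area density (m^2 of surface per m^3 of air, i.e. m^-1).
    M defaults to the molar mass of N2O5 (0.108 kg/mol)."""
    return 4.0 * k_het / (mean_speed(T, M) * surface_area_density)
```

At 298 K the mean molecular speed of N2O5 is roughly 240 m/s, so loss rates of order 10⁻⁴ s⁻¹ at a surface area density of order 10⁻³ m⁻¹ correspond to γ of order 10⁻³, the magnitude reported above.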
The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-)neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the self-annihilation cross-section, ⟨σ_A v⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≃4 · 10⁻²⁴ cm³ s⁻¹ and ≃2.6 · 10⁻²³ cm³ s⁻¹ for the νν̄ channel, respectively.
A global chemistry‐climate model is used to assess the impact on atmospheric composition of the regeneration and recycling of HOx in the photo‐oxidation of isoprene. The impact is explored subject to present‐day, pre‐industrial and future climate/emission scenarios. Our calculations show that, in all cases, the inclusion of uni‐molecular isomerisations of the isoprene hydroxy‐peroxy radicals leads to enhanced production of HOx radicals and ozone. The global burden of ozone increases by 25–36 Tg (8–18%), depending on the climate/emissions scenario, whilst the changes in OH lead to decreases in the methane lifetime of between 11% in the future and 35% in the pre‐industrial. Critically, the size of the change in methane lifetime depends on the VOC/NOx emission ratio. The results of the present‐day calculations suggest a certain amount of parameter refinement is still needed to reconcile the updated chemistry with field observations (particularly for HO2+RO2). However, the updated chemistry could have far‐reaching implications for future‐climate predictions, projections of future oxidising capacity, and our understanding of past changes in oxidising capacity.
We use a stratosphere–troposphere composition–climate model with interactive sulfur chemistry and aerosol microphysics to investigate the effect of the 1991 Mount Pinatubo eruption on stratospheric aerosol properties. Satellite measurements indicate that shortly after the eruption, between 14 and 23 Tg of SO2 (7 to 11.5 Tg of sulfur) was present in the tropical stratosphere. Best estimates of the peak global stratospheric aerosol burden are in the range 19 to 26 Tg, or 3.7 to 6.7 Tg of sulfur assuming a composition of between 59 and 77% H2SO4. In light of this large uncertainty range, we performed two main simulations with 10 and 20 Tg of SO2 injected into the tropical lower stratosphere. Simulated stratospheric aerosol properties through the 1991 to 1995 period are compared against a range of available satellite and in situ measurements. Stratospheric aerosol optical depth (sAOD) and effective radius from both simulations show good qualitative agreement with the observations, with the timing of peak sAOD and the decay timescale matching the observations well in the tropics and mid-latitudes. However, injecting 20 Tg gives a stratospheric aerosol mass burden a factor of 2 higher than the satellite data indicate, with consequent strong high biases in simulated sAOD and surface area density; the 10 Tg injection is in much better agreement. Our model cannot explain the large fraction of the injected sulfur that the satellite-derived SO2 and aerosol burdens indicate was removed within the first few months after the eruption. We suggest either that there is an additional loss pathway for the SO2 not included in our model (e.g. via accommodation onto ash or ice in the volcanic cloud) or that a larger proportion of the injected sulfur was removed via cross-tropopause transport than in our simulations.
We also critically evaluate the simulated evolution of the particle size distribution, comparing in detail to balloon-borne optical particle counter (OPC) measurements from Laramie, Wyoming, USA (41° N). Overall, the model captures remarkably well the complex variations in particle concentration profiles across the different OPC size channels. However, for the 19 to 27 km injection height-range used here, both runs have a modest high bias in the lowermost stratosphere for the finest particles (radii less than 250 nm), and the decay timescale is longer in the model for these particles, with a much later return to background conditions. Also, whereas the 10 Tg run compared best to the satellite measurements, a significant low bias is apparent in the coarser size channels in the volcanically perturbed lower stratosphere. Overall, our results suggest that, with appropriate calibration, aerosol microphysics models are capable of capturing the observed variation in particle size distribution in the stratosphere across both volcanically perturbed and quiescent conditions. Furthermore, additional sensitivity simulations suggest that predictions with the models are robust to uncertainties in sub-grid particle formation and nucleation rates in the stratosphere.
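The sulfur bookkeeping in the burden estimates above is straightforward stoichiometry: sulfur is about half of SO2 by mass (32/64), and the sulfur content of the aerosol is the burden multiplied by the assumed H2SO4 mass fraction and by 32/98. A small sketch of those conversions (an illustration of the arithmetic, not part of the model):

```python
M_S, M_SO2, M_H2SO4 = 32.06, 64.07, 98.08  # molar masses, g/mol

def so2_to_sulfur(m_so2):
    """Sulfur mass carried by a given SO2 mass (S is ~half of SO2 by mass)."""
    return m_so2 * M_S / M_SO2

def aerosol_to_sulfur(m_aerosol, h2so4_frac):
    """Sulfur mass in an aqueous sulfate aerosol burden, given the H2SO4
    mass fraction of the particles (the remainder is water)."""
    return m_aerosol * h2so4_frac * M_S / M_H2SO4
```

With these factors, the 14–23 Tg of SO2 quoted above corresponds to 7–11.5 Tg of sulfur, and the 19–26 Tg aerosol burden at 59–77% H2SO4 maps onto the several-Tg sulfur range quoted for the peak burden.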
Atmospheric chemistry is driven by photolytic reactions, making their modelling a crucial component of atmospheric models. We describe the implementation and validation of Fast-JX, a state-of-the-art interactive photolysis model, in the MetUM chemistry-climate model. This allows interactive photolysis rates to be calculated in the troposphere and augments the calculation of the rates in the stratosphere by accounting for clouds and aerosols in addition to ozone. To demonstrate the effectiveness of this new photolysis scheme, we employ new methods of validating the model, including techniques for sampling the model for comparison against flight-track and satellite data.