We present a new data set of attributes for 671 catchments in the contiguous United States (CONUS) minimally impacted by human activities. This complements the daily time series of meteorological forcing and streamflow provided by Newman et al. (2015b). To produce this extension, we synthesized diverse and complementary data sets to describe six main classes of attributes at the catchment scale: topography, climate, streamflow, land cover, soil, and geology. The spatial variations among basins over the CONUS are discussed and compared using a series of maps. The large number of catchments, combined with the diversity of the attributes we extracted, makes this new data set well suited for large-sample studies and comparative hydrology. In comparison to the similar Model Parameter Estimation Experiment (MOPEX) data set, this data set relies on more recent data, it covers a wider range of attributes, and its catchments are more evenly distributed across the CONUS. This study also involves assessments of the limitations of the source data sets used to compute catchment attributes, as well as detailed descriptions of how the attributes were computed. The hydrometeorological time series provided by Newman et al. (2015b, https://doi.org/10.5065/D6MW2F4D) together with the catchment attributes introduced in this paper (https://doi.org/10.5065/D6G73C3Q) constitute the freely available CAMELS data set, which stands for Catchment Attributes and MEteorology for Large-sample Studies.
Warming climate, melting ice, rising seas
We know that sea level will rise as climate warms. Nevertheless, accurate projections of how much sea-level rise will occur are difficult to make based solely on modern observations. Determining how ice sheets and sea level have varied in past warm periods can help us better understand how sensitive ice sheets are to higher temperatures. Dutton et al. review recent interdisciplinary progress in understanding this issue, based on data from four different warm intervals over the past 3 million years. Their synthesis provides a clear picture of the progress we have made and the hurdles that still exist.
Science, this issue, 10.1126/science.aaa4019
Reconstructing past magnitudes, rates, and sources of sea-level rise can help project what our warmer future may hold.
BACKGROUND
Although thermal expansion of seawater and melting of mountain glaciers have dominated global mean sea level (GMSL) rise over the last century, mass loss from the Greenland and Antarctic ice sheets is expected to exceed other contributions to GMSL rise under future warming. To better constrain polar ice-sheet response to warmer temperatures, we draw on evidence from interglacial periods in the geologic record that experienced warmer polar temperatures and higher GMSLs than present. Coastal records of sea level from these previous warm periods demonstrate geographic variability because of the influence of several geophysical processes that operate across a range of magnitudes and time scales. Inferring GMSL and ice-volume changes from these reconstructions is nontrivial and generally requires the use of geophysical models.
ADVANCES
Interdisciplinary studies of geologic archives have ushered in a new era of deciphering magnitudes, rates, and sources of sea-level rise. Advances in our understanding of polar ice-sheet response to warmer climates have been made through an increase in the number and geographic distribution of sea-level reconstructions, better ice-sheet constraints, and the recognition that several geophysical processes cause spatially complex patterns in sea level. In particular, accounting for glacial isostatic processes helps to decipher spatial variability in coastal sea-level records and has reconciled a number of site-specific sea-level reconstructions for warm periods that have occurred within the past several hundred thousand years. This enables us to infer that during recent interglacial periods, small increases in global mean temperature and just a few degrees of polar warming relative to the preindustrial period resulted in ≥6 m of GMSL rise. Mantle-driven dynamic topography introduces large uncertainties on longer time scales, affecting reconstructions for time periods such as the Pliocene (~3 million years ago), when atmospheric CO2 was ~400 parts per million (ppm), similar to that of the present. Both modeling and field evidence suggest that polar ice sheets were smaller during this time period, but because dynamic topography can cause tens of meters of vertical displacement at Earth’s surface on million-year time scales and uncertainties in model predictions of this signal are large, it is currently not possible to make a precise estimate of peak GMSL during the Pliocene.
OUTLOOK
Our present climate is warming to a level associated with significant polar ice-sheet loss in the past, but a number of challenges remain to further constrain ice-sheet sensitivity to climate change using paleo–sea level records. Improving our understanding of rates of GMSL rise due to polar ice-mass loss is perhaps the most societally relevant information the paleorecord can provide, yet robust estimates of rates of GMSL rise associated with polar ice-sheet retreat and/or collapse remain a weakness in existing sea-level reconstructions. Improving existing magnitudes, rates, and sources of GMSL rise will require a better (global) distribution of sea-level reconstructions with high temporal resolution and precise elevations and should include sites close to present and former ice sheets. Translating such sea-level data into a robust GMSL signal demands integration with geophysical models, which in turn can be tested through improved spatial and temporal sampling of coastal records.
Further development is needed to refine estimates of past sea level from geochemical proxies. In particular, paired oxygen isotope and Mg/Ca data are currently unable to provide confident, quantitative estimates of peak sea level during these past warm periods. In some GMSL reconstructions, polar ice-sheet retreat is inferred from the total GMSL budget, but identifying the specific ice-sheet sources is currently hindered by limited field evidence at high latitudes. Given the paucity of such data, emerging geochemical and geophysical techniques show promise for identifying the sectors of the ice sheets that were most vulnerable to collapse in the past and perhaps will be again in the future.
Peak global mean temperature, atmospheric CO2, maximum global mean sea level (GMSL), and source(s) of meltwater. Light blue shading indicates the uncertainty of the GMSL maximum. Red pie charts over Greenland and Antarctica denote the fraction (not location) of ice retreat.
Interdisciplinary studies of geologic archives have ushered in a new era of deciphering magnitudes, rates, and sources of sea-level rise from polar ice-sheet loss during past warm periods. Accounting for glacial isostatic processes helps to reconcile spatial variability in peak sea level during marine isotope stages 5e and 11, when the global mean reached 6 to 9 meters and 6 to 13 meters higher than present, respectively. Dynamic topography introduces large uncertainties on longer time scales, precluding robust sea-level estimates for intervals such as the Pliocene. Present climate is warming to a level associated with significant polar ice-sheet loss in the past. Here, we outline advances and challenges involved in constraining ice-sheet sensitivity to climate change with use of paleo–sea level records.
The SILCC (SImulating the Life-Cycle of molecular Clouds) project aims to self-consistently understand the small-scale structure of the interstellar medium (ISM) and its link to galaxy evolution. We simulate the evolution of the multiphase ISM in a (500 pc)² × ±5 kpc region of a galactic disc, with a gas surface density of Σ_gas = 10 M⊙ pc⁻². The FLASH 4 simulations include an external potential, self-gravity, magnetic fields, heating and radiative cooling, time-dependent chemistry of H2 and CO considering (self-)shielding, and supernova (SN) feedback, but omit shear due to galactic rotation. We explore SN explosions at different rates in high-density regions (peak), in random locations with a Gaussian distribution in the vertical direction (random), in a combination of both (mixed), or clustered in space and time (clus/clus2). Only models with self-gravity and a significant fraction of SNe that explode in low-density gas are in agreement with observations. Without self-gravity, and in models with peak driving, the formation of H2 is strongly suppressed. For decreasing SN rates, the H2 mass fraction increases significantly, from <10 per cent for high SN rates (0.5 dex above the Kennicutt–Schmidt (KS) value) to 70–85 per cent for low SN rates (0.5 dex below KS). For an intermediate SN rate, clustered driving results in slightly more H2 than random driving due to the more coherent compression of the gas in larger bubbles. Magnetic fields have little impact on the final disc structure but affect the dense gas (n ≳ 10 cm⁻³) and delay H2 formation. Most of the volume is filled with hot gas (∼80 per cent within ±150 pc). For all but peak driving, a vertically expanding warm component of atomic hydrogen indicates a fountain flow. We highlight that individual chemical species populate different ISM phases and cannot be accurately modelled with temperature- or density-based phase cut-offs.
In hydrology, two somewhat competing philosophies form the basis of most process-based models. At one endpoint of this continuum are detailed, high-resolution descriptions of small-scale processes that are numerically integrated to larger scales (e.g. catchments). At the other endpoint of the continuum are spatially lumped representations of the system that express the hydrological response via, in the extreme case, a single linear transfer function. Many other models, developed starting from these two contrasting endpoints, plot along this continuum with different degrees of spatial resolution and process complexity. A better understanding of the respective basis, as well as the respective shortcomings, of the different modelling philosophies has the potential to improve our models. In this paper we analyse several frequently communicated beliefs and assumptions to identify, discuss and emphasize the functional similarity of the seemingly competing modelling philosophies. We argue that deficiencies in model applications largely do not depend on the modelling philosophy, although some models may be more suitable for specific applications than others, but rather on the way a model is implemented. Based on the premises that any model can be implemented at any desired degree of detail and that any type of model remains to some degree conceptual, we argue that a convergence of modelling strategies may hold some value for advancing the development of hydrological models.
This article reviews studies demonstrating enhancement with transcranial direct current stimulation (tDCS) of attention, learning, and memory processes in healthy adults. Given that these are fundamental cognitive functions, they may also mediate stimulation effects on other higher-order processes such as decision-making and problem solving. Although tDCS research is still young, there have been a variety of methods used and cognitive processes tested. While these different methods have resulted in seemingly contradictory results among studies, many consistent and noteworthy effects of tDCS on attention, learning, and memory have been reported. The literature suggests that although tDCS as typically applied may not be as useful for localization of function in the brain as some other methods of brain stimulation, tDCS may be particularly well-suited for practical applications involving the enhancement of attention, learning, and memory, in both healthy subjects and in clinical populations.
•Neuroenhancement with tDCS is reviewed.
•A variety of tDCS methods produce similar cognitive effects.
•Beneficial effects on attention, learning, and memory have been found.
•tDCS may be particularly well-suited for neuroenhancement.
Low calcium intake may adversely affect bone health in adults. Recognizing the presence of low calcium intake is necessary to develop national strategies to optimize intake. To highlight regions where calcium intake should be improved, we systematically searched for the most representative national dietary calcium intake data in adults from the general population in all countries. We searched 13 electronic databases and requested data from domain experts. Studies were double-screened for eligibility. Data were extracted into a standard form. We developed an interactive global map, categorizing countries based on average calcium intake, and summarized differences in intake based on sex, age, and socioeconomic status. Searches yielded 9780 abstracts. Across the 74 countries with data, average national dietary calcium intake ranges from 175 to 1233 mg/day. Many countries in Asia have average dietary calcium intake less than 500 mg/day. Countries in Africa and South America mostly have low calcium intake, between about 400 and 700 mg/day. Only Northern European countries have national calcium intake greater than 1000 mg/day. Survey data for three quarters of available countries were not nationally representative. Average calcium intake is generally lower in women than men, but there are no clear patterns across countries regarding relative calcium intake by age, sex, or socioeconomic status. The global calcium map reveals that many countries have low average calcium intake, but recent, nationally representative data are mostly lacking. This review draws attention to regions where measures to increase calcium intake are likely to have skeletal benefits.
We present 3D ‘zoom-in’ simulations of the formation of two molecular clouds out of the galactic interstellar medium. We model the clouds – identified from the SILCC simulations – with a resolution of up to 0.06 pc using adaptive mesh refinement in combination with a chemical network to follow heating, cooling and the formation of H2 and CO including (self-)shielding. The two clouds are assembled within a few million years, with mass growth rates of up to ∼10⁻² M⊙ yr⁻¹ and final masses of ∼50 000 M⊙. A spatial resolution of ≲0.1 pc is required for convergence with respect to the mass, velocity dispersion and chemical abundances of the clouds, although these properties also depend on the cloud definition, such as one based on density thresholds or on H2 or CO mass fraction. To avoid grid artefacts, the progressive increase of resolution has to occur within the free-fall time of the densest structures (1–1.5 Myr), and ≳200 time-steps should be spent on each refinement level before the resolution is progressively increased further. This avoids the formation of spurious, large-scale, rotating clumps from unresolved turbulent flows. While CO is a good tracer for the evolution of dense gas with number densities n ≥ 300 cm⁻³, H2 is also found at n ≲ 30 cm⁻³ due to turbulent mixing and becomes dominant at column densities around 30–50 M⊙ pc⁻². The CO-to-H2 ratio steadily increases within the first 2 Myr, whereas XCO ≃ 1–4 × 10²⁰ cm⁻² (K km s⁻¹)⁻¹ is approximately constant, since the CO(1−0) line quickly becomes optically thick.
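The conversion factor quoted above can be applied directly: the H2 column density follows from the velocity-integrated CO(1−0) intensity as N(H2) = XCO · W_CO. A minimal sketch, assuming a mid-range XCO of 2 × 10²⁰ cm⁻² (K km s⁻¹)⁻¹ and hypothetical intensity values:

```python
import numpy as np

# CO-to-H2 conversion factor, taken from the middle of the
# ~1-4 x 10^20 cm^-2 (K km/s)^-1 range reported above.
X_CO = 2.0e20  # cm^-2 (K km s^-1)^-1

def h2_column_density(w_co):
    """Estimate N(H2) [cm^-2] from integrated CO(1-0) intensity W_CO [K km s^-1]."""
    return X_CO * np.asarray(w_co, dtype=float)

# Hypothetical integrated intensities along three sightlines.
w_co = np.array([1.0, 5.0, 20.0])   # K km s^-1
n_h2 = h2_column_density(w_co)      # -> [2e20, 1e21, 4e21] cm^-2
```

Because the CO(1−0) line becomes optically thick, this linear relation saturates at high column densities, which is why XCO stays approximately constant in the simulations above.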
Like many urban catchments, the River Thames in London is contaminated with plastics. This pollutant is recorded on the river banks, in the benthic environment and in the water column. The present study was conducted to assess the extent of microplastic ingestion in two River Thames fish species, the European flounder (Platichthys flesus) and European smelt (Osmerus eperlanus). Samples were collected from two sites in Kent, England: Erith and Isle of Grain/Sheppey, near Sheerness, the latter being more estuarine. The results revealed that up to 75% of sampled European flounder had plastic fibres in the gut, compared with only 20% of smelt. This difference may be related to their different feeding behaviours: European flounder are benthic feeders whilst European smelt are pelagic predators. The fibres were predominantly red or black polyamides; other fibres included acrylic, nylon, polyethylene and polyethylene terephthalate, and there was no difference in occurrence between the sites sampled.
(a) Percentage of fish – the first bar (green) represents the percentage of the sampled fish that had one or more plastic fibres in each sample. The subsequent bars (red, black, blue, clear) show the percentage of the sampled digestive tracts at each site which contained the different colour fibres. Some fish ingested several different coloured fibres, so the bars do not sum to 100%. (b) Percentage of fibres – the percentage of fibres that were red, black, blue and clear in each sample.
•Plastic fibres were found in Thames fish.
•Up to 75% of flounder ingested plastic fibres.
•Most fibres were black.
•Fibres were identified as acrylic, nylon, polyethylene, PET and polyamides.
This study is the first to report microplastic ingestion by estuarine organisms in the River Thames.
We describe spatial patterns in environmental injustice and inequality for residential outdoor nitrogen dioxide (NO2) concentrations in the contiguous United States. Our approach employs Census demographic data and a recently published high-resolution dataset of outdoor NO2 concentrations. Nationally, population-weighted mean NO2 concentrations are 4.6 ppb (38%, p<0.01) higher for nonwhites than for whites. The environmental health implications of that concentration disparity are compelling. For example, we estimate that reducing nonwhites' NO2 concentrations to levels experienced by whites would reduce Ischemic Heart Disease (IHD) mortality by ∼7,000 deaths per year, which is equivalent to 16 million people increasing their physical activity level from inactive (0 hours/week of physical activity) to sufficiently active (>2.5 hours/week of physical activity). Inequality for NO2 concentration is greater than inequality for income (Atkinson Index: 0.11 versus 0.08). Low-income nonwhite young children and elderly people are disproportionately exposed to residential outdoor NO2. Our findings establish a national context for previous work that has documented air pollution environmental injustice and inequality within individual US metropolitan areas and regions. Results given here can aid policy-makers in identifying locations with high environmental injustice and inequality. For example, states with both high injustice and high inequality (top quintile) for outdoor residential NO2 include New York, Michigan, and Wisconsin.
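The Atkinson Index comparison above (0.11 for NO2 exposure versus 0.08 for income) is a standard inequality measure and can be computed from first principles. A minimal sketch, assuming an inequality-aversion parameter ε = 0.75 (the paper's choice of ε is not stated here, and the exposure and income values below are hypothetical):

```python
import numpy as np

def atkinson(values, eps=0.75):
    """Atkinson inequality index: 0 for perfect equality, approaching 1
    for maximal inequality. `eps` is the inequality-aversion parameter."""
    y = np.asarray(values, dtype=float)
    mu = y.mean()
    if eps == 1.0:
        # Limit case: one minus the ratio of geometric to arithmetic mean.
        return 1.0 - np.exp(np.log(y).mean()) / mu
    return 1.0 - np.mean((y / mu) ** (1.0 - eps)) ** (1.0 / (1.0 - eps))

# Hypothetical per-person NO2 exposures (ppb) and annual incomes (USD).
no2 = [8.0, 12.0, 15.0, 20.0, 25.0]
income = [30000.0, 45000.0, 52000.0, 60000.0, 80000.0]
print(atkinson(no2), atkinson(income))
```

A higher index for the exposure distribution than for the income distribution is the sense in which the paper reports NO2 inequality exceeding income inequality.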
The Asteroid Terrestrial-impact Last Alert System (ATLAS) consists of two 0.5 m Schmidt telescopes with cameras covering 29 square degrees at a plate scale of 1.86 arcsec per pixel. Working in tandem, the telescopes routinely survey the whole sky visible from Hawaii (δ > −50°) every two nights, exposing four times per night, typically reaching o < 19 magnitude per exposure when the Moon is illuminated and c < 19.5 magnitude per exposure in dark skies. Construction is underway of two further units, to be sited in Chile and South Africa, which will result in an all-sky daily cadence from 2021. Initially designed for detecting potentially hazardous near-Earth objects, the ATLAS data enable a range of astrophysical time-domain science. Extracting transients from the data stream requires a computing system to process the data, assimilate detections in time and space, and associate them with known astrophysical sources. Here we describe the hardware and software infrastructure used to produce a stream of clean, real, astrophysical transients in real time. This involves machine learning and boosted decision tree algorithms to identify extragalactic and Galactic transients. Typically we detect 10–15 supernova candidates per night, which we immediately announce publicly. The ATLAS discoveries not only enable rapid follow-up of interesting sources but will provide complete statistical samples within the local volume of 100 Mpc. A simple comparison of the detected supernova rate within 100 Mpc, with no corrections for completeness, is already significantly higher (by a factor of 1.5 to 2) than the currently accepted rates.