• WHAM-FTOXβ was developed to help predict proton and metal toxic effects in the field.
• The model is based on the binding of cations at sites on living organisms.
• Toxic effects depend separately on metal potency and species sensitivity.
• Metal potencies are related to hard and soft chemical properties.
• Evidence is obtained for intrinsic (or common relative) species sensitivity.
We developed a model that quantifies aquatic cationic toxicity by a combination of the intrinsic toxicities of metals and protons and the intrinsic sensitivities of the test species. It is based on the WHAM-FTOX model, which combines the calculated binding of cations by the organism with toxicity coefficients (αH, αM) to estimate the variable FTOX, a measure of toxic effect; the key parameter αM,max (applying at infinite time) depends upon both the metal and the test species. In our new model, WHAM-FTOXβ, values of αM,max are given by the product αM* × β, where αM* has a single value for each metal, and β a single value for each species. To parameterise WHAM-FTOXβ, we assembled a set of 2182 estimates of αM,max obtained by applying the basic model to laboratory toxicity data for 76 different test species, covering 15 different metals, and including results for metal mixtures. We then fitted the log10αM,max values with αM* and β values (a total of 91 parameters). The resulting model accounted for 72% of the variance in log10αM,max. The values of αM* increased markedly as the chemical character of the metal changed from hard (average αM* = 4.4) to intermediate (average αM* = 25) to soft (average αM* = 560). The values of log10β were normally distributed, with a 5–95 percentile range of −0.73 to +0.56, corresponding to β values of 0.18 to 3.62. The WHAM-FTOXβ model entails the assumption that test species exhibit common relative sensitivity, i.e. that for a given species the ratio αM,max / αM* is the same for all metals. This was tested with data from studies in which the toxic responses of a single organism towards two or more metals had been measured (179 examples for the most-tested metals Ni, Cu, Zn, Ag, Cd, Pb), and the assumption received statistically significant (p < 0.003) support.
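To make the αM* × β decomposition concrete, the following is a minimal sketch (not the authors' fitting code; the toy data and all variable names are hypothetical) of estimating per-metal and per-species terms from log10 αM,max values by least squares, with the species effects centred to resolve the scale ambiguity between the two factors:

```python
# Sketch of the WHAM-FTOXbeta decomposition described above:
# log10(alpha_M,max) ~ log10(alpha_M*) + log10(beta), fitted by least
# squares over all (metal, species) observations. Toy data, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_metals, n_species = 4, 10
true_m = rng.normal(1.0, 0.8, n_metals)      # hypothetical log10 alpha_M*
true_b = rng.normal(0.0, 0.4, n_species)     # hypothetical log10 beta
metal = rng.integers(0, n_metals, 200)
species = rng.integers(0, n_species, 200)
y = true_m[metal] + true_b[species] + rng.normal(0, 0.3, 200)

# design matrix: one indicator column per metal and per species
X = np.zeros((y.size, n_metals + n_species))
X[np.arange(y.size), metal] = 1.0
X[np.arange(y.size), n_metals + species] = 1.0

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# only the product alpha_M* x beta is observed, so the split is identified
# up to a constant; centring the species effects (mean log10 beta = 0) is
# one conventional choice
shift = coef[n_metals:].mean()
log_alpha_star = coef[:n_metals] + shift
log_beta = coef[n_metals:] - shift

resid = y - X @ coef
print("variance explained:", 1 - resid.var() / y.var())
```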
We compiled published and newly-obtained data on the directly-measured atmospheric deposition of total phosphorus (TP), filtered total phosphorus (FTP), and inorganic phosphorus (PO4-P) to open land, lakes, and marine coasts. The resulting global database includes data for c. 250 sites, covering the period 1954 to 2012. Most (82%) of the measurement locations are in Europe and North America, with 44 in Africa, Asia, Oceania, and South-Central America. The deposition rates are log-normally distributed, and for the whole data set the geometric mean deposition rates are 0.027, 0.019 and 0.014 g m⁻² a⁻¹ for TP, FTP and PO4-P respectively. At smaller scales there is little systematic spatial variation, except for high deposition rates at some sites in Germany, likely due to local agricultural sources. In cases for which PO4-P was determined as well as one of the other forms of P, the logarithmic values were strongly correlated. Based on the directly-measured deposition rates to land, and published estimates of P deposition to the oceans, we estimate a total annual transfer of P to and from the atmosphere of 3.7 Tg. However, much of the phosphorus in larger particles (principally primary biological aerosol particles) is probably redeposited near to its origin, so that long-range transport, important for tropical forests, large areas of peatland and the oceans, mainly involves fine dust from deserts and soils, as described by the simulations of Mahowald et al. (Global Biogeochemical Cycles 22, GB4026, 2008). We suggest that local release to the atmosphere and subsequent deposition bring about a pseudo-diffusive redistribution of P in the landscape, with P-poor ecosystems, for example ombrotrophic peatlands and oligotrophic lakes, gaining at the expense of P-rich ones. Simple calculations suggest that atmospheric transport could bring about significant local redistribution of P among terrestrial ecosystems. Although most atmospherically transported P is natural in origin, local transfers from fertilised farmland to P-poor ecosystems may be significant, and this requires further research.
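As a small aside on the statistics, the geometric mean of log-normally distributed rates is the exponential of the mean of the logarithms; a sketch with made-up numbers, not the survey data:

```python
# Geometric mean of hypothetical log-normally distributed TP deposition rates
import numpy as np

tp = np.array([0.011, 0.024, 0.035, 0.052, 0.019])  # illustrative values, g m^-2 a^-1
geo_mean = np.exp(np.log(tp).mean())
print(f"geometric mean TP deposition: {geo_mean:.3f} g m^-2 a^-1")
```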
• Metal accumulation by living organisms is successfully simulated with WHAM.
• Modelled organism-bound metal provides a measure of toxic exposure.
• The toxic potency of individual bound metals is quantified by fitting toxicity data.
• Eleven laboratory mixture toxicity data sets were parameterised.
• Relatively little variability amongst individual test organisms is indicated.
The WHAM-FTOX model describes the combined toxic effects of protons and metal cations towards aquatic organisms through the toxicity function (FTOX), a linear combination of the products of organism-bound cation and a toxic potency coefficient (αi) for each cation. Organism-bound, metabolically active cation is quantified by a proxy variable, the amount bound by humic acid (HA), as predicted by the WHAM chemical speciation model. We compared published measured accumulations of metals by living organisms (bacteria, algae, invertebrates) in different solutions with WHAM predictions of metal binding to humic acid in the same solutions. After adjustment for differences in binding site density, the predictions agreed reasonably well with observations (for logarithmic variables, r2 = 0.89, root mean squared deviation = 0.44), supporting the use of HA binding as a proxy. Calculated loadings of H+, Al, Cu, Zn, Cd, Pb and UO2 were used to fit observed toxic effects in 11 published mixture toxicity experiments involving bacteria, macrophytes, invertebrates and fish. Overall, WHAM-FTOX gave slightly better fits than a conventional additive model based on solution concentrations. From the derived values of αi, the toxicity of bound cations can tentatively be ranked in the order: H < Al < (Zn–Cu–Pb–UO2) < Cd. The WHAM-FTOX analysis indicates much less variability amongst individual organisms in metal toxicity tests than was previously thought. The model potentially provides a means to encapsulate knowledge contained within laboratory data, thereby permitting its application to field situations.
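A minimal sketch of the FTOX calculation as described, assuming hypothetical potency coefficients and bound amounts; the linear ramp between lower and upper FTOX thresholds is an assumed effect mapping, not taken from this abstract:

```python
# FTOX as a linear combination of organism-bound cation amounts (nu_i,
# standing in for WHAM-predicted humic-acid binding) weighted by per-cation
# potency coefficients alpha_i. All numbers are illustrative.
import numpy as np

alpha = {"H": 1.0, "Al": 2.0, "Cu": 3.1, "Cd": 4.0}   # assumed potencies
nu = {"H": 0.8, "Al": 0.3, "Cu": 0.05, "Cd": 0.01}    # assumed bound amounts, mmol/g

ftox = sum(alpha[i] * nu[i] for i in alpha)

# one common way to map FTOX to a fractional toxic response is a linear
# ramp between a lower and an upper threshold (an assumption here)
ftox_lower, ftox_upper = 1.0, 4.0
effect = np.clip((ftox - ftox_lower) / (ftox_upper - ftox_lower), 0.0, 1.0)
print(f"FTOX = {ftox:.2f}, fractional toxic effect = {effect:.2f}")
```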
Soil organic matter (SOM) is a major ecosystem component, central to soil fertility, carbon balance and other soil functions. To advance SOM modelling, we devised a steady-state model of topsoil SOM, with explicit descriptions of physical states and properties, and used it to simulate SOM concentration, carbon:nitrogen:phosphorus (C:N:P) stoichiometry, bulk density and radiocarbon content. The model classifies SOM by element stoichiometry (αSOM is poor in N and P, βSOM is rich), mean residence times (1–2000 years) and physical state (free, occluded, adsorbed, hypoxic). The most stable SOM is either βSOM preferentially adsorbed by mineral matter, or αSOM in strongly hypoxic zones. Soil properties were simulated for random combinations of plant litter input (amount and C:N:P stoichiometry), mineral sorption capacity, propensity for hypoxia, and bulk density of non-adsorbed αSOM. To optimize model parameters, outputs from 5000 simulations were used to construct bivariate relations among soil variables, which were compared with those found in data for 835 survey sites, covering all common land uses. The bivariate relations, and patterns of data scatter, were reproduced, and also variations in soil radiocarbon with soil type, suggesting that apparent scatter in measured data might reflect SOM diversity. The temporal acquisition by soil of ‘bomb 14C’ could also be simulated. The steady-state model is the basis for a dynamic version, suitable for simulating changes in SOM through time. It provides insight into the possible manipulation of soil organic carbon (SOC) sequestration; for example, increasing litter inputs might only increase moderately-stable SOC pools, whereas encouraging the creation of βSOM by adsorption to mineral matter from deeper soil could lead to long-term stabilization.
Highlights
Models of SOM should include explicit descriptions of physical states and properties.
Our new topsoil SOM model is constrained by C:N:P stoichiometry, soil organic ¹⁴C, and physical fractionation data.
Simulated soil properties, randomly generated, account for measured trends and patterns of scatter in SOM data.
SOM properties depend upon litter input, interactions with mineral matter, hypoxia, and bulk density.
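The sampling scheme in the abstract above can be illustrated with a hedged Monte Carlo sketch: random driver combinations feed a toy steady-state SOC calculation (stock = input × mean residence time), from which bivariate relations can be examined. The pool structure and parameter ranges below are illustrative assumptions, not the published model:

```python
# Monte Carlo over random driver combinations, with a toy steady-state
# SOC stock. Values and pool structure are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
litter_c = rng.uniform(0.05, 0.5, n)   # litter C input, kg C m^-2 a^-1 (assumed range)
sorption = rng.uniform(0.0, 0.7, n)    # fraction of SOM stabilised on minerals (assumed)
hypoxia = rng.uniform(0.0, 0.3, n)     # fraction in strongly hypoxic zones (assumed)

# assumed mean residence times (years) for free, adsorbed and hypoxic SOM
mrt_free, mrt_ads, mrt_hyp = 20.0, 1000.0, 2000.0
mrt = (1 - sorption - hypoxia) * mrt_free + sorption * mrt_ads + hypoxia * mrt_hyp

soc = litter_c * mrt                   # steady state: input x residence time

# a bivariate relation analogous to those compared with survey data
corr = np.corrcoef(np.log(soc), sorption)[0, 1]
print(f"correlation of log SOC with sorption fraction: {corr:.2f}")
```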
Fertilization of nitrogen (N)-limited ecosystems by anthropogenic atmospheric nitrogen deposition (Ndep) may promote CO2 removal from the atmosphere, thereby buffering human effects on global radiative forcing. We used the biogeochemical ecosystem model N14CP, which considers interactions among C (carbon), N and P (phosphorus), driven by a new reconstruction of historical Ndep, to assess the responses of soil organic carbon (SOC) stocks in British semi-natural landscapes to anthropogenic change. We calculate that increased net primary production due to Ndep has enhanced detrital inputs of C to soils, causing an average increase of 1.2 kg C m⁻² (c. 10%) in SOC over the period 1750–2010. The simulation results are consistent with observed changes in topsoil SOC concentration in the late 20th century, derived from sample-resample measurements at nearly 2000 field sites. More than half (57%) of the additional topsoil SOC is predicted to have a short turnover time (c. 20 years), and will therefore be sensitive to future changes in Ndep. The results are the first to validate model predictions of Ndep effects against observations of SOC at a regional field scale. They demonstrate the importance of long-term macronutrient interactions and the transitory nature of soil responses in the terrestrial C cycle.
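A one-pool calculation, using the figures quoted above (1.2 kg C m⁻² gain, 57% of it in a pool with c. 20-year turnover), illustrates why the response is transitory; the assumption that inputs return to baseline and the pool decays exponentially is ours, not N14CP output:

```python
# First-order decay of the fast-turnover share of the N-driven SOC gain:
# with turnover time tau, a pool relaxes towards input * tau, so extra C
# is lost within a few tau if deposition declines. Illustrative only.
import numpy as np

tau = 20.0              # turnover time of the fast pool, years (from the text)
extra_c0 = 0.57 * 1.2   # share of the 1.2 kg C m^-2 gain held in the fast pool

years = np.arange(0, 101, 20)
remaining = extra_c0 * np.exp(-years / tau)   # decay after inputs return to baseline
for t, c in zip(years, remaining):
    print(f"year {t:3d}: {c:.2f} kg C m^-2 of the N-driven gain remaining")
```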
Understanding forecast reconciliation. Hollyman, Ross; Petropoulos, Fotios; Tipping, Michael E. European Journal of Operational Research, Volume 294, Issue 1, 10/2021. Journal Article · Peer reviewed · Open access.
• We relate recent literature on Forecast Reconciliation to the extensive body of work on Forecast Combination.
• We demonstrate how the linear constraints which naturally apply to the data can be used to generate indirect forecasts of each time series. These are then combined with direct forecasts to improve forecast accuracy.
• The techniques described are generally applicable beyond the hierarchical setting and can improve forecast accuracy in any multivariate forecasting scenario where time series are subject to linear constraints.
• We demonstrate significant improvements in forecast accuracy in the noisiest and hardest-to-forecast time series.
A series of recent papers introduce the concept of Forecast Reconciliation, a process by which independently generated forecasts of a collection of linearly related time series are reconciled via the introduction of accounting aggregations that naturally apply to the data. Aside from its clear presentational and operational virtues, the reconciliation approach generally improves the accuracy of the combined forecasts. In this paper, we examine the mechanisms by which this improvement is generated by re-formulating the reconciliation problem as a combination of direct forecasts of each time series with additional indirect forecasts derived from the linear constraints. Our work establishes a direct link between the nascent Forecast Reconciliation literature and the extensive work on Forecast Combination. In the original hierarchical setting, our approach clarifies for the first time how unbiased forecasts for the entire collection can be generated from base forecasts made at any level of the hierarchy, and we illustrate more generally how simple robust combined forecasts can be generated in any multivariate setting subject to linear constraints. In an empirical example, we show that simple combinations of such forecasts generate significant improvements in forecast accuracy where it matters most: where noise levels are highest and the forecasting task is at its most challenging.
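A toy sketch of the two linked ideas, using made-up numbers for a two-level hierarchy (total = A + B): the constraint yields indirect forecasts that can be combined with the direct base forecasts, and least-squares (OLS) reconciliation projects the base forecasts onto the coherent subspace:

```python
import numpy as np

y_hat = np.array([103.0, 61.0, 45.0])   # base forecasts: [total, A, B]

# (i) indirect forecasts implied by the constraint, combined 50/50 with
# the direct forecasts
indirect = np.array([y_hat[1] + y_hat[2],   # total from A + B
                     y_hat[0] - y_hat[2],   # A from total - B
                     y_hat[0] - y_hat[1]])  # B from total - A
combined = 0.5 * (y_hat + indirect)

# (ii) OLS reconciliation: project onto the coherent subspace y = S b
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b, *_ = np.linalg.lstsq(S, y_hat, rcond=None)
reconciled = S @ b                          # coherent by construction
print(combined, reconciled, sep="\n")
```

The simple 50/50 combination is illustrative only; the projection in (ii) is what guarantees coherent (constraint-satisfying) outputs.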
Probabilistic Principal Component Analysis. Tipping, Michael E.; Bishop, Christopher M. Journal of the Royal Statistical Society: Series B (Statistical Methodology), Volume 61, Issue 3, 1999. Journal Article · Peer reviewed · Open access.
Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based on a probability model. We demonstrate how the principal axes of a set of observed data vectors may be determined through maximum likelihood estimation of parameters in a latent variable model that is closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss, with illustrative examples, the advantages conveyed by this probabilistic approach to PCA.
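Besides the iterative EM algorithm, the maximum likelihood solution has a closed form via an eigendecomposition of the sample covariance; a compact numpy sketch on synthetic data, with the arbitrary rotation factor taken as the identity:

```python
# Closed-form ML PPCA: sigma^2_ML is the average of the discarded
# eigenvalues, and W_ML spans the leading eigenvectors scaled by
# sqrt(lambda_j - sigma^2). Data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))   # synthetic data, d = 5
q = 2                                                     # latent dimension

mu = X.mean(axis=0)
S = np.cov(X - mu, rowvar=False)            # sample covariance
lam, U = np.linalg.eigh(S)                  # ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]              # sort descending

sigma2 = lam[q:].mean()                     # ML noise variance
W = U[:, :q] * np.sqrt(lam[:q] - sigma2)    # ML weight matrix (rotation R = I)

# the model covariance implied by PPCA: C = W W^T + sigma^2 I
C = W @ W.T + sigma2 * np.eye(X.shape[1])
print("sigma^2_ML =", round(sigma2, 3))
```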
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing, and visualizing data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Therefore, previous attempts to formulate mixture models for PCA have been ad hoc to some extent. In this article, PCA is formulated within a maximum likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm. We discuss the advantages of this model in the context of clustering, density modeling, and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
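A simplified EM sketch for such a mixture on synthetic data: responsibilities are computed from the component densities N(μk, WkWkᵀ + σk²I), and each component's Wk and σk² are re-estimated from an eigendecomposition of its responsibility-weighted covariance. This illustrates the idea; it is not the paper's full algorithm:

```python
# Minimal EM for a mixture of probabilistic principal component analysers.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (150, 3)),
               rng.normal(4, 1, (150, 3))])   # synthetic two-cluster data
n, d, K, q = X.shape[0], X.shape[1], 2, 1

# init: random means, isotropic components
pi = np.full(K, 1.0 / K)
mu = X[rng.choice(n, K, replace=False)]
W = [np.eye(d)[:, :q] for _ in range(K)]
sigma2 = np.ones(K)

for _ in range(30):
    # E-step: responsibilities under each PPCA density
    dens = np.column_stack([
        pi[k] * multivariate_normal.pdf(X, mu[k], W[k] @ W[k].T + sigma2[k] * np.eye(d))
        for k in range(K)])
    R = dens / dens.sum(axis=1, keepdims=True)

    # M-step: weighted means, then a local eigendecomposition gives W, sigma^2
    for k in range(K):
        rk = R[:, k]
        pi[k] = rk.mean()
        mu[k] = rk @ X / rk.sum()
        Xc = X - mu[k]
        Sk = (rk[:, None] * Xc).T @ Xc / rk.sum()
        lam, U = np.linalg.eigh(Sk)
        lam, U = lam[::-1], U[:, ::-1]
        sigma2[k] = lam[q:].mean()
        W[k] = U[:, :q] * np.sqrt(np.maximum(lam[:q] - sigma2[k], 1e-9))

print("mixing weights:", np.round(pi, 2))
print("component means:", np.round(mu, 2))
```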
• Laboratory toxicity testing data for metals used to parameterize WHAM-FTOX.
• Lake zooplankton richness data fitted by optimizing species sensitivity distribution.
• Identification of toxic cations responsible for decreases in species richness.
The WHAM-FTOX model quantifies cation toxicity towards freshwater organisms, assuming an additive toxic response to the amounts of protons and metals accumulated by an organism. We combined a parameterization of the model, using data from multi-species laboratory toxicity tests, with a fitted field species sensitivity distribution, to simulate the species richness (nsp) of crustacean zooplankton in acid- and metal-contaminated lakes near Sudbury, Ontario over several decades, and also in reference (uncontaminated) lakes. A good description of variation in toxic response among the zooplankton species was achieved with a log-normal distribution of a new parameter, β, which characterizes an organism’s intrinsic sensitivity towards toxic cations; the greater the β, the more sensitive the species. The use of β assumes that while species vary in their sensitivity, the relative toxicities of different metals are the same for each species (common relative sensitivity). Simulated nsp agreed with observations without bias and with high correlation (r2 = 0.81, p < 0.0001, n = 217). Variations in zooplankton species richness in the Sudbury lakes are calculated to be dominated by toxic responses to H, Al, Cu and Ni, with a small contribution from Zn, and negligible effects of Cd, Hg and Pb. According to the model, some of the Sudbury lakes were affected predominantly by acidification (H and Al), while others were most influenced by toxic heavy metals (Ni, Cu, Zn); for lakes in the latter category, the relative importance of heavy metals, compared to H and Al, has increased over time. The results suggest that, if common relative sensitivity operates, nsp can be modelled on the basis of a single set of parameters characterizing the average toxic effects of different cations, together with a species sensitivity distribution.
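A hedged sketch of how a log-normal sensitivity distribution converts an exposure into expected richness: a species with sensitivity β is lost once its scaled toxic function exceeds a threshold, so nsp is the species-pool size times a log-normal survival fraction. All parameter values below are illustrative, and the spread of log10 β is inferred loosely from the percentile range reported in the related abstract above, not a fitted value:

```python
# Expected species richness from a log-normal species sensitivity
# distribution; all parameters are illustrative assumptions.
import numpy as np
from scipy.stats import norm

pool = 15          # regional species pool (assumed)
ftox_base = 2.0    # FTOX for a species with beta = 1 (assumed)
threshold = 3.0    # FTOX above which a species is lost (assumed)

# log10(beta) ~ Normal(0, s); the 5-95 percentile range of -0.73 to +0.56
# quoted above suggests s of roughly 0.4 (an inference, not a fit)
s = 0.4

# a species survives if beta * ftox_base < threshold, i.e.
# log10(beta) < log10(threshold / ftox_base)
frac_surviving = norm.cdf(np.log10(threshold / ftox_base) / s)
n_sp = pool * frac_surviving
print(f"expected species richness: {n_sp:.1f} of {pool}")
```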