The O-acetylation of polysaccharides is a common modification used by pathogenic organisms to protect against external forces. Pseudomonas aeruginosa secretes the anionic, O-acetylated exopolysaccharide alginate during chronic infection in the lungs of cystic fibrosis patients to form the major constituent of a protective biofilm matrix. Four proteins have been implicated in the O-acetylation of alginate: AlgI, AlgJ, AlgF, and AlgX. To probe the biological function of AlgJ, we determined its structure to 1.83 Å resolution. AlgJ is a SGNH hydrolase-like protein, which, while structurally similar to the N-terminal domain of AlgX, exhibits a distinctly different electrostatic surface potential. Consistent with other SGNH hydrolases, we identified a conserved catalytic triad composed of D190, H192 and S288 and demonstrated that AlgJ exhibits acetylesterase activity in vitro. Residues in the AlgJ signature motifs were found to form an extensive network of interactions that are critical for O-acetylation of alginate in vivo. Using two different electrospray ionization mass spectrometry (ESI-MS) assays we compared the abilities of AlgJ and AlgX to bind and acetylate alginate. Binding studies using defined-length polymannuronic acid revealed that AlgJ exhibits either weak or no detectable polymer binding, while AlgX binds polymannuronic acid specifically in a length-dependent manner. Additionally, AlgX was capable of utilizing the surrogate acetyl donor 4-nitrophenyl acetate to catalyze the O-acetylation of polymannuronic acid. Our results, combined with previously published in vivo data, suggest that the annotated O-acetyltransferases AlgJ and AlgX have separate and distinct roles in O-acetylation. Our refined model for alginate acetylation places AlgX as the terminal acetyltransferase and provides a rationale for the variability in the number of proteins required for polysaccharide O-acetylation.
Marine litter pollution is a global environmental problem. Beach litter is a part of this problem, and is widely monitored in Europe. The European Marine Strategy Framework Directive (MSFD) requires a reduction of beach litter. A reduction of 30% has been proposed in the European Plastics Strategy. The aims of this study are (a) to develop a method to calculate sufficiently stable and precise baseline values for beach litter, and (b) to derive a power analysis method to estimate the number of beach litter surveys necessary to detect a given reduction, using these baseline values. Beach litter data from the OSPAR (Oslo Paris Convention) region were used, and tailor-made statistical methods were implemented in open source software, litteR. Descriptive statistics and Theil-Sen and Mann-Kendall trend analyses were calculated for the most abundant beach litter types, for 14 survey sites. The length of a baseline period necessary to obtain a specified precision of the mean baseline value, expressed as Coefficient of Variation (CV), was calculated. Power analyses were performed using Monte Carlo simulations combined with Wilcoxon tests to determine significant deviations of the simulated datasets from the baseline mean values. For most survey sites, the mean length of monitoring periods necessary to achieve a CV < 10% amounts to four to five years with four surveys a year. The mean number of surveys necessary to detect a statistically significant reduction of 30% with 80% power ranges from 14 to 20. Power analyses show that a reduction of 10% is difficult to detect, because more than 24 surveys are needed. In contrast, a reduction of 40–50% can be detected easily with a small (<12) number of surveys. The new methods could also be applied to other areas where similar beach litter surveys are performed.
•A method to calculate stable baseline values for beach litter was developed.
•A power analysis method for reduction analysis of beach litter was developed.
•The methods were applied to time series of 14 OSPAR beaches.
•The new statistical methods were implemented in new R software called litteR.
•Application of these methods to other beach litter data sets is feasible.
New statistical baseline and power calculation methods for beach litter show that the mean number of years necessary to achieve a stable baseline value with a Coefficient of Variation (CV) < 10% for total abundance and single litter types is in the range of 4.4–4.9 years per beach with four surveys a year.
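The Monte Carlo power analysis described above can be sketched as follows. This is a minimal, illustrative Python version of the general approach (the study itself used the R package litteR); the lognormal distribution, the CV value, and all parameter values here are assumptions for demonstration only:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

def power_wilcoxon(baseline_mean, reduction, n_surveys, cv=0.5,
                   n_sim=2000, alpha=0.05):
    """Estimate the power to detect a relative reduction in litter counts.

    Simulated surveys are drawn from a lognormal distribution whose mean
    equals baseline_mean * (1 - reduction); each simulated set of surveys
    is compared against the baseline mean with a one-sample Wilcoxon
    signed-rank test. Power is the fraction of significant simulations.
    """
    target_mean = baseline_mean * (1.0 - reduction)
    sigma = np.sqrt(np.log(1.0 + cv**2))       # lognormal shape from the CV
    mu = np.log(target_mean) - 0.5 * sigma**2  # lognormal scale
    hits = 0
    for _ in range(n_sim):
        surveys = rng.lognormal(mu, sigma, n_surveys)
        # one-sample test: is the median of (survey - baseline) below zero?
        _, p = wilcoxon(surveys - baseline_mean, alternative="less")
        hits += p < alpha
    return hits / n_sim

# e.g. a 30 % reduction, 16 quarterly surveys (4 years of monitoring):
print(power_wilcoxon(baseline_mean=100, reduction=0.30, n_surveys=16))
```

Increasing the number of surveys or the size of the reduction raises the estimated power, which mirrors the study's finding that small (10%) reductions need many more surveys than large (40–50%) ones.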
•Assess usual soil map quality indices.
•No single map quality index is useful for all purposes.
•Plot various map quality indices in Taylor and solar diagrams.
•Taylor and solar diagrams provide better insight into map quality than separate indices.
For many decades, soil scientists have produced spatial estimates of soil properties using statistical and non-statistical mapping models. Commonly in soil mapping studies the map quality is assessed through pairwise comparison of observed and predicted values of a soil property, from which statistical indices summarizing the quality of the entire map are computed. Often these indices are based on average error and correlation statistics. In this study, we recommend a more appropriate and effective method of map evaluation by means of Taylor and solar diagrams. Taylor and solar diagrams are summary diagrams exploiting the relationship between statistical indices to visualize different aspects of map quality in a single plot. An important advantage over current map quality evaluation is that map quality can be assessed from the combined effect of a few statistical quantities, not just on the basis of a single index or list of indices. We illustrate the use of common statistical indices and their combination into summary diagrams with a simulation study and two applications on soil data. In the simulation study, nine maps with known statistical properties are produced and evaluated with tables and summary diagrams. In the first case study with soil data, change in the quality of a large-scale topsoil organic carbon map is tracked for a number of permutations of the mapping model parameters, whereas in the second case study several maps of topsoil organic carbon content for the same area, made by various statistical and non-statistical models, are compared and evaluated. We consider that in all cases better insight into map quality is obtained with summary diagrams than with a single index or an extensive list of indices. This underpins the importance of using integrated summary graphics to communicate quantitative map quality, so as to avoid the excessive trust that a single map quality index may suggest.
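The statistics a Taylor diagram summarizes can be computed as below. This is a minimal Python sketch (the function name and interface are illustrative, not from the study); it exploits the geometric identity that links the centred RMSE to the two standard deviations and the correlation:

```python
import numpy as np

def taylor_stats(obs, pred):
    """Statistics underlying a Taylor diagram for one map.

    Returns the standard deviations of observations and predictions,
    their Pearson correlation r, and the centred RMSE, which satisfy
    crmse^2 = sd_obs^2 + sd_pred^2 - 2 * sd_obs * sd_pred * r.
    """
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    sd_obs, sd_pred = obs.std(), pred.std()
    r = np.corrcoef(obs, pred)[0, 1]
    # centred RMSE: bias is removed before computing the error
    crmse = np.sqrt(np.mean(((pred - pred.mean()) - (obs - obs.mean()))**2))
    return sd_obs, sd_pred, r, crmse
```

On a Taylor diagram each map is plotted as a single point at radius sd_pred and angle arccos(r); its distance to the reference point (sd_obs on the horizontal axis) is the centred RMSE, so one point conveys three quality aspects at once.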
In response to the growing societal awareness of the critical role of healthy soils, there has been an increasing demand for accurate and high-resolution soil information to inform national policies and support sustainable land management decisions. Despite advancements in digital soil mapping and initiatives like GlobalSoilMap, quantifying soil variability and its uncertainty across space, depth and time remains a challenge. Therefore, maps of key soil properties are often still missing on a national scale, which is also the case in the Netherlands. To meet this challenge and fill this data gap, we introduce BIS-4D, a high-resolution soil modeling and mapping platform for the Netherlands. BIS-4D delivers maps of soil texture (clay, silt and sand content), bulk density, pH, total nitrogen, oxalate-extractable phosphorus, cation exchange capacity and their uncertainties at 25 m resolution between 0 and 2 m depth in 3D space. Additionally, it provides maps of soil organic matter and its uncertainty in 3D space and time between 1953 and 2023 at the same resolution and depth range. The statistical model uses machine learning informed by between 3,815 and 855,950 soil observations, depending on the soil property, and 366 environmental covariates. We assess the accuracy of mean and median predictions using design-based statistical inference of a probability sample and location-grouped 10-fold cross-validation (CV), and prediction uncertainty using the prediction interval coverage probability.
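The prediction interval coverage probability (PICP) used above to assess prediction uncertainty can be sketched as follows. This is an illustrative Python version (the function name is an assumption; BIS-4D's actual implementation may differ):

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction interval coverage probability: the fraction of observed
    values that fall inside their prediction intervals. For a well-
    calibrated 90 % prediction interval, PICP should be close to 0.90."""
    y_true = np.asarray(y_true, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return np.mean((y_true >= lower) & (y_true <= upper))
```

Comparing the PICP against the nominal coverage across several interval widths shows whether the model's uncertainty estimates are too narrow (overconfident) or too wide (underconfident).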
The objective was to develop an optimal vegetation index (VIopt) to predict nitrogen in a wheat crop (kg N ha⁻¹) with a multi-spectral radiometer. Optimality means that nitrogen in the crop can be measured accurately in the field during the growing season, and that the measurements are stable under changing light conditions and vibrations of the measurement platform. Different fields, on which various nitrogen application rates and seeding densities were applied in experimental plots, were measured optically during the growing season over three years. Optical measurements on eight dates were related to calibration measurements of nitrogen in the crop (kg N ha⁻¹) as measured in the laboratory. By making combinations of the wavelength bands, with and without taking the soil factor into account, numerous vegetation indices (VIs) were examined for their accuracy in predicting nitrogen in wheat. The effect of changing light conditions in the field and vibrations of the measurement platform on the VIs was determined based on field tests. The optimal vegetation index found, VIopt = (1 + L)(R²NIR + 1)/(Rred + L) with L = 0.45, was best in predicting nitrogen in the grain crop. The root mean squared error (RMSE), determined by means of cross-validation, was 16.7 kg N ha⁻¹, significantly lower than that of other frequently used VIs such as NDVI, RVI, DVI, and SAVI. The L-value can change between 0.16 and 1.6 without deteriorating the prediction RMSE. Besides being the best predictor for nitrogen, VIopt had the advantage of being stable under changing light conditions and platform vibrations, and also has a simple structure of physically meaningful bands.
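The index formula can be expressed in code as follows. This is a hedged Python sketch of the formula as stated in the abstract (function names and the sample reflectance values are illustrative, not from the study):

```python
import numpy as np

def vi_opt(r_nir, r_red, L=0.45):
    """Optimal vegetation index as given in the abstract:
    VIopt = (1 + L) * (R_NIR**2 + 1) / (R_red + L), with L = 0.45."""
    r_nir = np.asarray(r_nir, dtype=float)
    r_red = np.asarray(r_red, dtype=float)
    return (1.0 + L) * (r_nir**2 + 1.0) / (r_red + L)

def ndvi(r_nir, r_red):
    """Standard NDVI, one of the indices VIopt was compared against."""
    r_nir = np.asarray(r_nir, dtype=float)
    r_red = np.asarray(r_red, dtype=float)
    return (r_nir - r_red) / (r_nir + r_red)

# illustrative reflectances for a healthy canopy (assumed values)
print(vi_opt(0.5, 0.1), ndvi(0.5, 0.1))
```

Because the abstract reports that L may vary between 0.16 and 1.6 without degrading the prediction RMSE, the default L = 0.45 is not a sharp tuning requirement.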
Question
To assess the acidification process, nationwide information about soil pH at the site level is needed. Measurements of soil pH could be used, but there are not enough measurements available to map soil pH nationwide at the site level. Instead, we developed a soil pH map based on vegetation data.
Location
Natural terrestrial areas in The Netherlands.
Methods
271,693 vegetation plots were used to estimate the average soil pH per plot using indicator values of plant species, based on field measurements. A soil pH map was created by spatially interpolating the average pH values between the plots, with soil type, groundwater table and vegetation management type as ancillary explanatory variables. The map covers all terrestrial nature areas (all areas that are not built-up, agricultural or infrastructural areas) in the Netherlands at a resolution of 25 m × 25 m raster cells.
Results
The predicted pH of the map varied between 3.0 and 8.6, with standard errors between 0.13 and 0.93. Most of the standard errors range from 0.4 to 0.55, with an average just below 0.5 pH unit. Cross-validation shows that for 33% of the cases the difference between observed and predicted pH is between −0.1 and 0.1 pH unit, and for 83% it is between −0.5 and 0.5 pH unit. Validation shows that the pH map is unbiased (mean error is almost zero), accurate (root mean squared error is 0.64) and captures spatial patterns well (r = 0.77). We applied the pH map to assess the impact of acidification on the abiotic quality of nature areas in the Netherlands.
Conclusions
The model fit of the predicted soil pH is good, resulting in a low standard error and a high correlation. The measures taken to prevent acidic deposition from further acidifying nature areas can therefore be considered successful.
Soil pH is one of the determining factors for species occurrence; species composition can therefore be used to reveal information about soil pH. For the Netherlands, over 200,000 vegetation plots are available to estimate soil pH, and these estimated pH values were used to make a pH map for Dutch nature areas.
To manage the potential conflict between outdoor recreation and nature conservation, managers of nature areas need information to select effective interventions. For large nature areas, information on visitor use is often lacking, and managers often make decisions based on expert judgement. In this paper we use monitoring data gathered with GPS devices to develop a tool, and derive rules of thumb, that managers can use to estimate the impact of management actions on visitor densities. Using a dataset of 1563 tracks from the New Forest, UK, we developed a random forest model and identified which landscape and environmental features account for the spatial variation in visitor densities. The random forest model shows that distance to car park, distance to roads and openness are the most important factors for predicting visitor densities. The model was used as a tool to assess the impact of potential management interventions on the population of Nightjar. As developing this type of tool requires a lot of data, we also derived rules of thumb and a simple algorithm that managers of other nature areas can use to estimate the impact of their interventions on visitor densities. The derived rules of thumb show that changing the location of car parks in relation to tarmac roads can help managers to reduce local visitor densities by 80%. Further research in other nature areas should verify the feasibility of these rules of thumb and the simple algorithm.
•GPS data provide information to understand what drives visitor densities.
•Random forest models can be used as a tool to assess the impact of interventions.
•Current recreational use lowers the Nightjar population by 38% in the New Forest, UK.
•Changing the location of car parks in relation to roads is an effective intervention.
•Managers might use a simple algorithm for a first estimation of visitor densities.
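A random forest of the kind described above can be sketched as follows. This is an illustrative Python example on synthetic data (the covariates, their effects and all values are assumptions for demonstration, not the New Forest dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# illustrative covariates: distances in metres, openness as a 0-1 fraction
X = np.column_stack([
    rng.uniform(0, 3000, n),   # distance to nearest car park
    rng.uniform(0, 2000, n),   # distance to nearest road
    rng.uniform(0, 1, n),      # openness of the landscape
])
# synthetic visitor density that decays with distance to the car park
y = np.exp(-X[:, 0] / 800) + 0.1 * X[:, 2] + rng.normal(0, 0.05, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
names = ["dist_car_park", "dist_road", "openness"]
print(dict(zip(names, model.feature_importances_)))
```

The feature importances reported by the fitted model make the "what drives visitor densities" question quantitative; to simulate an intervention, one would modify a covariate (e.g. increase distance to car park) and compare predicted densities before and after.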
•A new method to assess benthic fauna conditions in the Southern North Sea region was developed.
•This benthic fauna assessment is based on the Margalef diversity index, reference value modelling and BENMMI software.
•Margalef diversity alone appeared to be the best performing index, better than multi-metric index combinations.
•The sensitivity and precision of Margalef diversity were demonstrated for the anthropogenic pressures fishing, organic enrichment, sedimentation and a heavy metal.
•This new benthic assessment is potentially suitable for implementation in other European marine regions.
The aims of this study are to develop an optimized method for regional benthic fauna assessment of the Southern North Sea which (a) is sensitive and precise (quantified as the slope and the R2 value of the pressure-impact relationships, respectively) for the anthropogenic pressures bottom fishing and organic enrichment, (b) is suitable for estimating and modelling reference values, (c) is transparent, (d) can be efficiently applied using dedicated software; and to apply this method to benthic data from the Southern North Sea.
Margalef diversity appeared to be the best performing benthic index regarding these criteria, even better than several Multi-Metric Indices (MMIs) containing e.g. AMBI (AZTI Marine Biotic Index) and ITI (Infaunal Trophic Index). Therefore, this relatively simple and very practical index, including a new reference value estimation and modelling method, and BENMMI software were selected as a common OSPAR (Oslo Paris convention) method for the benthic fauna assessment of the Southern North Sea. This method was applied to benthic fauna data from the Southern North Sea collected during the period 2010–2015. The results in general show lower normalized Margalef values in coastal areas, and higher normalized Margalef values in deeper offshore areas.
The following benthic indices were compared in this study: species richness, Margalef diversity, SNA index, Shannon index, PIE index, AMBI, and ITI. For each assessment area, the least disturbed benthic dataset was selected as an adjacent 6-year period with, on average, the highest Margalef diversity values. For these datasets, the reference values were primarily set as the 99th percentile values of the respective indices. This procedure results in the highest stable reference values that are not outliers. In addition, a variable percentile method was developed, in which the percentile value is adjusted to the average bottom fishing pressure (according to data from the International Council for the Exploration of the Sea, ICES) in the period 2009–2013. The adjusted percentile values were set by expert judgement at the 75th (low fishing pressure), 95th (medium fishing pressure) and 99th (high fishing pressure) percentile. The estimated reference values for Margalef diversity correlate quite well with the median depth of the assessment areas using a sigmoid model (pseudo-R2 = 0.86). This relationship between depth and Margalef diversity was used to estimate reference values in case an assessment area had insufficient benthic data.
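The Margalef index and the percentile-based reference values described above can be sketched as follows. This is a minimal Python version (function names are illustrative; Margalef diversity is the standard d = (S − 1)/ln N, with S the number of species and N the total number of individuals):

```python
import numpy as np

def margalef(abundances):
    """Margalef diversity d = (S - 1) / ln(N) for one benthic sample,
    where S is the number of species present and N the total count."""
    abundances = np.asarray(abundances)
    S = np.count_nonzero(abundances)   # species with at least one individual
    N = abundances.sum()
    return (S - 1) / np.log(N)

def reference_value(index_values, fishing_pressure="high"):
    """Variable-percentile reference value as described in the text:
    75th / 95th / 99th percentile for low / medium / high pressure."""
    pct = {"low": 75, "medium": 95, "high": 99}[fishing_pressure]
    return np.percentile(index_values, pct)
```

Dividing an area's Margalef values by its reference value yields the normalized index values (NIVs) used in the pressure-impact analysis below 1.0 indicating disturbance relative to the least disturbed state.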
For testing the effects of bottom fishing pressure, normalized index values (NIV; index value divided by reference value) were used. The rationale for using NIVs is the assumption that, although a certain level of bottom fishing pressure will have a larger absolute effect on more biodiverse benthic communities in deeper waters than on more robust and less biodiverse coastal benthic communities, the relative effects (tested using NIVs) are comparable. A clear exponentially decreasing relationship (R2 = 0.26–0.27, p < 0.00001) was found between both bottom surface and subsurface fishing activity (penetration depth <2 cm and >2 cm, respectively) and normalized Margalef diversity values, with an asymptotic normalized Margalef value of 0.45 at a subsurface fishing activity >2.3 sweeps/year. This asymptotic value is predominantly found in coastal waters, and probably shows that the naturally more robust coastal benthic communities have been transformed into resilient benthic communities, which rapidly recover from increasing fishing pressure.
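The exponentially decreasing pressure-impact fit described above can be sketched as follows. This is an illustrative Python example with made-up data (the study's actual observations are not reproduced here; only the model form, an exponential decay towards an asymptote, follows the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, a, b, c):
    """Exponentially decreasing pressure-impact model with asymptote c
    (here: the normalized Margalef value reached at high fishing)."""
    return a * np.exp(-b * x) + c

# illustrative data: normalized index values (index / reference value)
# against subsurface fishing activity in sweeps per year (assumed values)
sweeps = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0])
niv = np.array([1.00, 0.85, 0.72, 0.62, 0.55, 0.50, 0.47, 0.46])

params, _ = curve_fit(decay, sweeps, niv, p0=(0.5, 1.0, 0.45))
a, b, c = params
print(f"asymptotic normalized value: {c:.2f}")
```

The fitted asymptote plays the role of the 0.45 reported in the study: the level at which further increases in fishing activity no longer lower the normalized diversity.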
Modelling soil erosion sensitivity at continental scale provides a way to compare different countries and to identify the areas that are most seriously threatened. In this research, the MESALES model was applied to three large areas in Europe and Morocco, using soil data from the ESDB and DSMW as well as from the newly developed e-SOTER database. Land use data were derived from the Global Land Cover 2000 database, and slope angle from the HYDRO1K DEM. The aim was to evaluate whether the e-SOTER database resulted in a better assessment of soil erosion sensitivity than existing data; expert opinion was used to judge this. The comparison of results obtained with existing data and with e-SOTER data showed considerable differences. However, it proved impossible to say which results were better. The main reasons were that MESALES predicts soil erosion sensitivity, which cannot be measured in the field, and that expert judgement of the model results proved inconclusive. Another possible reason is that the e-SOTER database is as yet incomplete. The fact that different soil databases produced quite different results does, however, indicate the importance of using the best available data for the evaluation of soil threats. A current lack of options to validate soil erosion sensitivity estimates was also identified.
•MESALES was applied to three areas in Europe and Morocco to assess erosion sensitivity.•Two databases were compared: legacy database and newly developed e-SOTER database.•Expert evaluation indicated that both databases performed equally well.•Results demonstrate the importance of using the best available input data.•Results indicate a current lack of options to validate soil erosion sensitivity estimates.