We review 15 techniques for estimating missing values of net ecosystem CO2 exchange (NEE) in eddy covariance time series and evaluate their performance for different artificial gap scenarios based on a set of 10 benchmark datasets from six forested sites in Europe.
The goal of gap filling is the reproduction of the NEE time series, and hence this work focuses on estimating missing NEE values, not on editing or removing suspect values caused by systematic errors in the measurements (e.g., nighttime flux, advection). The gap filling was examined by generating 50 secondary datasets with artificial gaps (ranging in length from single half-hours to 12 consecutive days) for each benchmark dataset and evaluating the performance with a variety of statistical metrics. The performance of the gap filling varied among sites and depended on the level of aggregation (native half-hourly time step versus daily); long gaps were more difficult to fill than short gaps, and differences among the techniques were more pronounced during the day than at night.
The non-linear regression techniques (NLRs), the look-up table (LUT), marginal distribution sampling (MDS), and the semi-parametric model (SPM) generally showed good overall performance. The artificial neural network based techniques (ANNs) were generally, if only slightly, superior to the other techniques. The simple interpolation technique of mean diurnal variation (MDV) showed a moderate but consistent performance. Several sophisticated techniques, namely the dual unscented Kalman filter (UKF), the multiple imputation method (MIM), and the terrestrial biosphere model (BETHY), but also one of the ANNs and one of the NLRs, showed high biases, which resulted in low reliability of the annual sums, indicating that additional development might be needed. An uncertainty analysis comparing the estimated random error in the 10 benchmark datasets with the artificial gap residuals suggested that the techniques are already at or very close to the noise limit of the measurements. Based on the techniques and site data examined here, the effect of gap filling on the annual sums of NEE is modest, with most techniques falling within a range of ±25 g C m−2 year−1.
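Of the techniques compared above, mean diurnal variation (MDV) is simple enough to sketch in a few lines. The following is a minimal illustration, not the benchmark implementation: the half-hourly time step, the ±7 day window, the NaN gap coding, and the function name mdv_fill are assumptions made here for concreteness.

```python
import numpy as np
import pandas as pd

def mdv_fill(nee: pd.Series, window_days: int = 7) -> pd.Series:
    """Fill gaps in a half-hourly NEE series using mean diurnal variation
    (MDV): each missing half-hour gets the mean of the same time-of-day
    slot within +/- window_days around the gap."""
    values = nee.to_numpy(dtype=float)
    filled = nee.copy()
    slots_per_day = 48  # half-hourly data assumed
    for i in np.where(np.isnan(values))[0]:
        offsets = np.arange(-window_days, window_days + 1) * slots_per_day
        idx = i + offsets
        idx = idx[(idx >= 0) & (idx < len(values))]
        neighbours = values[idx]
        if np.any(~np.isnan(neighbours)):
            filled.iloc[i] = np.nanmean(neighbours)
    return filled

# usage: nee is a half-hourly pd.Series with gaps coded as NaN
# nee_filled = mdv_fill(nee, window_days=7)
```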
• We synthesize the measurement and dynamics of dead wood carbon and decomposition.
• Many protocols exist for inventorying standing dead trees and downed woody debris.
• Research needs are presented to promote the accurate quantification of dead wood.
• Issues presented here are hindered by unknowns of future global change scenarios.
The amount and dynamics of forest dead wood (both standing and downed) have been quantified by a variety of approaches throughout the forest science and ecology literature. Differences in the sampling and quantification of dead wood can lead to differences in our understanding of forests and their role in the sequestration and emissions of CO2, as well as in developing appropriate strategies for achieving dead wood-related objectives, including biodiversity protection and procurement of forest bioenergy feedstocks. A thorough understanding of the various methods available for quantifying dead wood stores and decomposition is critical for comparing studies and drawing valid conclusions. General assessments of forest dead wood are conducted by numerous countries as a part of their national forest inventories, while detailed experiments that employ field-based and modeling methods to understand woody debris patterns and processes have greatly advanced our understanding of dead wood dynamics. We review methods for quantifying dead wood in forest ecosystems, with an emphasis on biomass and carbon attributes. These methods encompass various sampling protocols for inventorying standing dead trees and downed woody debris, and an assortment of approaches for forecasting wood decomposition through time. Recent research has provided insight into dead wood attributes related to biomass and carbon content through the use of structural reduction factors and robust modeling approaches, both of which have improved our understanding of dead wood dynamics. Our review, while emphasizing temperate forests, identifies key research needs and knowledge gaps that at present impede our ability to accurately characterize dead wood populations. In sum, we synthesize the current literature on the measurement and dynamics of forest dead wood carbon stores and decomposition as a baseline for researchers and natural resource managers concerned about forest dead wood patterns and processes.
The design of forest inventories and the development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations, such as costs, go into the choice of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of precision is always among the most important criteria. Similarly, in designing new sampling methods, one always seeks to decrease the variance of the new method compared to existing methods. This paper provides a review of some graphical methods that may prove useful in these endeavors. In addition, in the case of the comparison of variances between new and existing methods, it introduces the use of wavelet filtering to decompose the sampling variance associated with the estimators under consideration into scale-based components of variance. This yields an analysis of variance of sorts regarding how the methods compare over different distance/area classes. The graphical tools are also shown to be applicable to the wavelet decomposition. These graphical tools may prove useful in summarizing the results for inventory design, while the wavelet results may prove helpful as we begin to look at sampling designs more in light of spatial processes for a given population of trees or downed coarse woody debris.
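As a rough illustration of the scale-based decomposition described above, the sketch below splits the variability of a one-dimensional sequence of estimator values into wavelet detail scales using PyWavelets. The Haar wavelet, the transect-style data layout, and the normalization by total detail energy are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
import pywt

def wavelet_variance_components(x, wavelet="haar"):
    """Decompose the variability of a 1-D sequence (e.g. estimator values
    along a transect) into scale-based components using a discrete wavelet
    transform. Returns each detail scale's share of the total detail energy,
    ordered from coarsest to finest scale."""
    x = np.asarray(x, dtype=float)
    coeffs = pywt.wavedec(x - x.mean(), wavelet)      # [cA_n, cD_n, ..., cD_1]
    detail_energy = [np.sum(c ** 2) for c in coeffs[1:]]
    total = sum(detail_energy)
    return [e / total for e in detail_energy]

# usage (illustrative): x could be per-point volume estimates along a line
# shares = wavelet_variance_components(np.random.randn(256))
```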
The fundamental validity of the self-thinning "law" has been debated over the last three decades. A long-standing concern centers on how to objectively select data points for fitting the self-thinning line and the most appropriate regression method for estimating the two coefficients. Using data from an even-aged Pinus strobus L. stand as an example, we show that quantile regression (QR), deterministic frontier function (DFF), and stochastic frontier function (SFF) methods have the potential to produce an upper limiting boundary line above all plots for the maximum size-density relationship, without subjectively selecting a subset of data points based on predefined criteria. On the other hand, ordinary least squares (OLS), corrected ordinary least squares (COLS), and reduced major axis (RMA) methods are sensitive to the data selected for model fitting and may produce self-thinning lines with inappropriate slopes. However, statistical inference is very difficult with the DFF and QR methods. Although SFF produces a self-thinning line lower than the upper limiting boundary line because of the nature of the method, it can easily produce the statistics for inference on the model coefficients, given that there are no significant departures from underlying distributional assumptions.
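A minimal sketch of the quantile regression idea follows, fitting an upper quantile of the log size-log density relationship with statsmodels; the 0.99 quantile, the log-log model form, and the variable names are assumptions for illustration, not the authors' exact fitting setup.

```python
import numpy as np
import statsmodels.api as sm

def fit_self_thinning_qr(mean_mass, density, q=0.99):
    """Fit ln(mean tree mass) = a + b * ln(stand density) at an upper
    quantile, approximating the maximum size-density boundary line."""
    y = np.log(mean_mass)
    X = sm.add_constant(np.log(density))
    res = sm.QuantReg(y, X).fit(q=q)
    a, b = res.params            # intercept and slope of the boundary line
    return a, b, res

# usage (illustrative): plot-level mean tree mass and stems per hectare
# a, b, res = fit_self_thinning_qr(mean_mass, density, q=0.99)
```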
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 g C m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots), would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
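The Metropolis algorithms mentioned above can be illustrated with a toy example. The sketch below calibrates a deliberately simplified NEE model against noisy observations with a random-walk Metropolis sampler; the toy model, flat priors, Gaussian likelihood, and proposal scales are all assumptions made for illustration and do not reproduce the REFLEX model or protocols.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_nee_model(params, drivers):
    """Deliberately simple stand-in for a C model: NEE = Reco - GPP."""
    lue, r_base, q10 = params
    par, temp = drivers
    gpp = lue * par
    reco = r_base * q10 ** ((temp - 10.0) / 10.0)
    return reco - gpp

def log_posterior(params, drivers, obs, obs_sigma, bounds):
    # flat priors inside the bounds, Gaussian likelihood for observed NEE
    for p, (lo, hi) in zip(params, bounds):
        if not (lo < p < hi):
            return -np.inf
    resid = obs - toy_nee_model(params, drivers)
    return -0.5 * np.sum((resid / obs_sigma) ** 2)

def metropolis(drivers, obs, obs_sigma, bounds, start, step, n_iter=20000):
    """Random-walk Metropolis sampler over the toy model parameters."""
    step = np.asarray(step, dtype=float)
    current = np.asarray(start, dtype=float)
    lp_current = log_posterior(current, drivers, obs, obs_sigma, bounds)
    chain = np.empty((n_iter, len(current)))
    for i in range(n_iter):
        proposal = current + step * rng.standard_normal(len(current))
        lp_proposal = log_posterior(proposal, drivers, obs, obs_sigma, bounds)
        if np.log(rng.random()) < lp_proposal - lp_current:
            current, lp_current = proposal, lp_proposal
        chain[i] = current
    return chain

# usage (synthetic-data experiment in miniature):
# obs = toy_nee_model(true_params, (par, temp)) + noise
# chain = metropolis((par, temp), obs, obs_sigma=1.0,
#                    bounds=[(0.0, 0.1), (0.0, 10.0), (1.0, 4.0)],
#                    start=[0.02, 2.0, 2.0], step=[0.002, 0.2, 0.1])
```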
Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur quite high on the stem, making them difficult to view from the sample point. To surmount this difficulty, use of the "antithetic variate" associated with the critical height together with importance sampling from the cylindrical shells integral is proposed. This antithetic variate will be u = (1 − b/B), where b is the cross-sectional area at "borderline" condition and B is the tree's basal area. The cross-sectional area at borderline condition b can be determined with knowledge of the HPS gauge angle by measuring the distance to the sample tree. When the antithetic variate u is used in importance sampling, the upper-stem measurement will be low on tree stems close to the sample point and high on tree stems distant from the sample point, enhancing visibility and ease of measurement from the sample point. Computer simulations compared HPS, CHS, CHS with importance sampling (ICHS), ICHS with an antithetic variate (AICHS), and CHS with paired antithetic variates (PAICHS) and found that HPS, ICHS, AICHS, and PAICHS were very nearly equally precise and were more precise than CHS. These results are favorable to AICHS, since it should require less time than either PAICHS or ICHS and is not subject to individual-tree volume equation bias.
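The importance-sampling idea behind ICHS and its antithetic variants can be illustrated with a simplified stand-alone example. The sketch below estimates individual-tree volume as the integral of cross-sectional area over height by Monte Carlo importance sampling with antithetic pairs of uniform variates; the conical taper, the linear-taper proposal density, and the direct integral form (rather than the paper's cylindrical shells formulation) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def area_cone(h, basal_area, height):
    """Cross-sectional area at height h for an assumed conical stem."""
    return basal_area * (1.0 - h / height) ** 2

def antithetic_is_volume(basal_area, height, n=2000):
    """Estimate stem volume V = integral_0^H A(h) dh by importance sampling
    from the proposal density p(h) = 2(1 - h/H)/H, using antithetic pairs
    (u, 1 - u) of uniform variates to reduce variance."""
    u = rng.uniform(1e-9, 1.0 - 1e-9, n)
    per_pair = []
    for uu in (u, 1.0 - u):                      # antithetic pair
        h = height * (1.0 - np.sqrt(1.0 - uu))   # inverse CDF of the proposal
        p = 2.0 * (1.0 - h / height) / height
        per_pair.append(area_cone(h, basal_area, height) / p)
    return float((0.5 * (per_pair[0] + per_pair[1])).mean())

# quick check against the exact cone volume basal_area * height / 3:
# v_hat = antithetic_is_volume(basal_area=0.07, height=24.0)
# v_true = 0.07 * 24.0 / 3.0
```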
The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute, because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical height measured from the original sample point. A generalization of the mirage method is proposed for CHS and related techniques in which new samples of critical heights or other dimensions are obtained from mirage points outside the tract boundary. This is necessary because, in the case of CHS, the critical height actually depends on the distance between the tree and a randomly located sample point. Other spatially referenced individual tree attribute or coarse woody debris (CWD) estimators that use Monte Carlo integration with importance sampling have been developed in which the tree or CWD attribute estimate also depends on the distance between the tree and the sample point. The proposed modified mirage method is shown to be design unbiased. The proof includes general application to Monte Carlo integration estimators for objects such as CWD sampled from points.
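For context, the classic "double counting" version of the mirage method that this work generalizes can be sketched for ordinary HPS near a straight boundary. The straight vertical boundary at x = boundary_x, the metric plot-radius factor, and the names below are illustrative assumptions, not the generalized procedure proposed in the paper.

```python
import numpy as np

def hps_tally_with_mirage(point_xy, trees_xy, dbh_cm, baf, boundary_x):
    """Tally weights for horizontal point sampling with the classic mirage
    correction: reflect the sample point across a straight boundary at
    x = boundary_x, tally again from the mirage point, and give weight 2
    to trees selected from both points ("double counting")."""
    # limiting (plot) radius in metres for BAF in m^2/ha and dbh in cm
    limit_r = dbh_cm / (2.0 * np.sqrt(baf))

    def selected(pt):
        d = np.hypot(trees_xy[:, 0] - pt[0], trees_xy[:, 1] - pt[1])
        return d <= limit_r

    mirage_pt = (2.0 * boundary_x - point_xy[0], point_xy[1])
    weights = selected(point_xy).astype(int) + selected(mirage_pt).astype(int)
    return weights  # 0, 1 or 2 per tree

# usage (illustrative): basal area per hectare estimate at one sample point
# ba_per_ha = baf * hps_tally_with_mirage(pt, trees_xy, dbh_cm, baf, x_b).sum()
```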
Background
A new variance estimator is derived and tested for big BAF (Basal Area Factor) sampling, a forest inventory system that utilizes Bitterlich sampling (point sampling) with two BAF sizes: a small BAF for tree counts and a larger BAF on which tree measurements are made, usually including the DBHs and heights needed for volume estimation.
Methods
The new estimator is derived using the Delta method from an existing formulation of the big BAF estimator as consisting of three sample means. The new formula is compared to existing big BAF estimators including a popular estimator based on Bruce’s formula.
Results
Several computer simulation studies were conducted comparing the new variance estimator to all known variance estimators for big BAF currently in the forest inventory literature. In simulations the new estimator performed well and comparably to existing variance formulas.
Conclusions
A possible advantage of the new estimator is that it does not require the assumption of negligible correlation between basal area counts on the small BAF factor and volume-basal area ratios based on the large BAF factor selection trees, an assumption required by all previous big BAF variance estimation formulas. Although this correlation was negligible on the simulation stands used in this study, it is conceivable that the correlation could be significant in some forest types, such as those in which the DBH-height relationship can be affected substantially by density, perhaps through competition. We derived a formula that can be used to estimate the covariance between estimates of mean basal area and the ratio of estimates of mean volume and mean basal area. We also mathematically derived expressions for bias in the big BAF estimator that can be used to show the bias approaches zero in large samples on the order of 1/n, where n is the number of sample points.
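The covariance-aware variance idea can be sketched for the generic product-of-means form of a big BAF-style estimator (mean basal area per hectare times mean volume/basal-area ratio). The per-point pairing of the two quantities below is an illustrative simplification, not the paper's three-sample-mean derivation.

```python
import numpy as np

def delta_var_product_of_means(x, y):
    """Delta-method approximation to Var(x_bar * y_bar) for paired per-point
    values x_i, y_i (e.g. basal area per ha from the count BAF and a mean
    volume/basal-area ratio at the same point), keeping the covariance term
    that simpler formulas drop."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    xb, yb = x.mean(), y.mean()
    var_xb = x.var(ddof=1) / n
    var_yb = y.var(ddof=1) / n
    cov_xy = np.cov(x, y, ddof=1)[0, 1] / n
    return yb ** 2 * var_xb + xb ** 2 * var_yb + 2.0 * xb * yb * cov_xy

# usage (illustrative): standard error of the product-of-means estimator
# se = np.sqrt(delta_var_product_of_means(ba_per_point, vbar_per_point))
```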
In recent years, alternative modeling techniques have been used to account for spatial autocorrelation among data observations. They include the linear mixed model (LMM), generalized additive model (GAM), multi-layer perceptron (MLP) neural network, radial basis function (RBF) neural network, and geographically weighted regression (GWR). Previous studies show these models are robust to the violation of model assumptions and flexible to nonlinear relationships among variables. However, many of them are non-spatial in nature. In this study, we utilize a local spatial analysis method (i.e., the local Moran coefficient) to investigate spatial distribution and heterogeneity in model residuals from these modeling techniques, with ordinary least-squares (OLS) as the benchmark. The regression model used in this study has tree crown area as the response variable, and tree diameter and the coordinates of tree locations as the predictor variables. The results indicate that LMM, GAM, MLP and RBF may improve model fitting to the data and provide better predictions for the response variable, but they generate spatial patterns for model residuals similar to OLS. The OLS, LMM, GAM, MLP and RBF models yield more residual clusters of similar values, indicating that trees in some sub-areas are either all underestimated or all overestimated for the response variable. In contrast, GWR estimates model coefficients at each location in the study area and produces more accurate predictions for the response variable. Furthermore, the residuals of the GWR model have more desirable spatial distributions than those derived from the OLS, LMM, GAM, MLP and RBF models.
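A minimal sketch of the local Moran coefficient applied to model residuals follows, using a row-standardised k-nearest-neighbour neighbourhood; the choice of k = 8, the brute-force distance matrix, and the OLS-residual input are assumptions for illustration, not the study's configuration.

```python
import numpy as np

def local_moran(values, coords, k=8):
    """Local Moran coefficient I_i for each observation with row-standardised
    k-nearest-neighbour weights: positive I_i flags a value surrounded by
    similar values (a cluster), negative I_i a spatial outlier."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    m2 = np.mean(z ** 2)
    coords = np.asarray(coords, dtype=float)
    n = len(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)       # exclude self from the neighbourhood
    moran_i = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]
        lag = z[nbrs].mean()          # spatially lagged residual
        moran_i[i] = (z[i] / m2) * lag
    return moran_i

# usage (illustrative): residuals from an OLS crown-area model, tree x/y coords
# I = local_moran(residuals, tree_xy, k=8)
```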
Background
The Chapman-Richards distribution is developed as a special case of the equilibrium solution to the McKendrick-Von Foerster equation. The Chapman-Richards distribution incorporates the vital rate assumptions of the Chapman-Richards growth function, constant mortality, and recruitment into the mathematical form of the distribution. Therefore, unlike ‘assumed’ distribution models, it is intrinsically linked with the underlying vital rates for the forest area under consideration.
Methods
It is shown that the Chapman-Richards distribution can be recast as a subset of the generalized beta distribution of the first kind, a rich family of assumed probability distribution models with known properties. These known properties for the generalized beta are then immediately available for the Chapman-Richards distribution, such as the form of the compatible basal area-size distribution. A simple two-stage procedure is proposed for the estimation of the model parameters and simulation experiments are conducted to validate the procedure for four different possible distribution shapes.
Results
The simulations explore the efficacy of the two-stage estimation procedure; these cover the estimation of the growth equation and mortality—recruitment derives from the equilibrium assumption. The parameter estimates are shown to depend on both the sample size and the amount of noise imparted to the synthetic measurements. The results vary somewhat by distribution shape, with the smaller, noisier samples providing less reliable estimates of the vital rates and final distribution forms.
Conclusions
The Chapman-Richards distribution in its original form, or recast as a generalized beta form, presents a potentially useful model integrating vital rates and stand diameters into a flexible family of resultant distribution shapes. The data requirements are modest, and parameter estimation is straightforward provided the minimum recommended sample sizes are obtained.
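The first stage of a two-stage procedure of this kind can be sketched as a nonlinear least-squares fit of the Chapman-Richards growth function; the parameterisation A(1 − exp(−k t))^p, the starting values, and the synthetic data in the usage note are assumptions for illustration, not the authors' estimation procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, A, k, p):
    """Chapman-Richards growth function: asymptote A, rate k, shape p."""
    return A * (1.0 - np.exp(-k * t)) ** p

def fit_growth_stage(age, diameter):
    """Stage-one fit of the growth parameters from size-age observations."""
    p0 = [diameter.max() * 1.2, 0.05, 1.5]        # rough starting values
    params, cov = curve_fit(chapman_richards, age, diameter, p0=p0,
                            maxfev=10000)
    return params, cov

# usage (illustrative, synthetic data):
# age = np.linspace(5, 120, 60)
# diameter = chapman_richards(age, 60.0, 0.04, 1.8) + np.random.normal(0, 1, 60)
# (A_hat, k_hat, p_hat), cov = fit_growth_stage(age, diameter)
```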