Lightning climate change projections show large uncertainties caused by limited empirical knowledge and by strong assumptions inherent to coarse-grid climate modeling. This study addresses the latter issue by implementing the lightning potential index (LPI) parameterization in a fine-grid convection-permitting regional climate model (CPM). This setup takes advantage of the explicit representation of deep convection in CPMs and allows for process-oriented LPI inputs, such as the vertical velocity within convective cells and the coexistence of microphysical hydrometeor types, which are known to contribute to charge separation mechanisms. The LPI output is compared to output from a simpler flash rate parameterization, namely the CAPE × PREC parameterization, applied in a non-CPM on a coarser grid. The implementation of the LPI in the regional climate model COSMO-CLM successfully reproduces the observed lightning climatology, including its latitudinal gradient, its daily and hourly probability distributions, and its diurnal and annual cycles. Moreover, the simulated temperature dependence of lightning reflects the observed dependence. The LPI outperforms the CAPE × PREC parameterization in all applied diagnostics. Based on this satisfactory evaluation, we applied the LPI to a climate change projection under the RCP8.5 scenario. For the domain under investigation, centered over Germany, the LPI projects a decrease of 4.8% in flash rate by the end of the century, in opposition to an increase of 17.4% projected by the CAPE × PREC parameterization. The future decrease of the LPI occurs mostly during summer afternoons and is related to (i) a change in convection occurrence and (ii) changes in the microphysical mixing. The two parameterizations differ because convection occurs differently in the CPM and the non-CPM, and because changes in the microphysical mixing are represented only in the LPI parameterization.
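The CAPE × PREC approach mentioned above can be sketched in a few lines: the flash-rate density is taken proportional to the product of convective available potential energy and precipitation rate. This is a minimal illustration, not the study's implementation; the calibration constant `ETA` is purely illustrative and would have to be fitted against an observed lightning climatology.

```python
# Minimal sketch of a CAPE x PREC flash-rate proxy.
# ETA is an illustrative (assumed) proportionality constant, not a
# published calibration value.
ETA = 1.0e-11

def cape_prec_flash_rate(cape, prec, eta=ETA):
    """Flash-rate density proportional to CAPE (J/kg) times precipitation
    rate (kg m^-2 s^-1); non-positive CAPE or precipitation yields no
    lightning."""
    return eta * max(cape, 0.0) * max(prec, 0.0)
```

Because the proxy depends only on two bulk column quantities, it cannot respond to changes in in-cloud microphysical mixing, which is exactly the limitation the LPI is designed to overcome.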
Since radar observations are highly dense in spatial and temporal resolution, they have often been used to improve short‐term numerical weather prediction (NWP) by means of detailed model verification and 3D radar data assimilation. However, the observed quantities are not directly comparable to the prognostic variables of NWP models (e.g. hydrometeor densities, wind vector, temperature, pressure, etc.), so a common approach to facilitate this comparison is to derive synthetic radar observations from model variables; this is the so‐called 'radar forward operator'. In the present article, a new Efficient Modular VOlume scanning RADar Operator (EMVORADO) for Doppler velocity and reflectivity is introduced. Although it has been developed in the COSMO model framework, it can also be coupled online to any other NWP model. Comprehensive physical aspects of radar measurements (e.g. beam bending/broadening/shielding, Doppler velocity with fall speed and reflectivity weighting, attenuated reflectivity, detectable signal, etc.) have been implemented in a modular way, using state‐of‐the‐art methods with different levels of approximation and numerical cost that can be chosen optionally. The reflectivity derivation from the prognostic model variables is as 'model consistent' as possible and carefully honours the uncertainties associated with partially melted particles. Efficiency and applicability on supercomputers (MPI‐parallelism) are a major design criterion, which allows us to simulate entire networks of 3D volume‐scanning meteorological radars within one model run and makes EMVORADO well suited for operational applications. This article aims to give a thorough description of EMVORADO and to provide a first insight into the performance of different modules by some selected case studies.
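To illustrate the simplest building block of such a forward operator: in the Rayleigh scattering limit, the reflectivity factor is the sixth moment of the particle size distribution, Z = ∫ N(D) D⁶ dD. The sketch below evaluates this analytically for an exponential (Marshall–Palmer type) distribution N(D) = N₀ exp(−ΛD); EMVORADO's actual reflectivity computation is far more comprehensive (Mie scattering, melting particles, attenuation), so this is only a conceptual sketch.

```python
import math

def rayleigh_reflectivity_mp(n0, lam):
    """Rayleigh reflectivity factor Z = integral of N(D) * D^6 dD for an
    exponential size distribution N(D) = n0 * exp(-lam * D).
    Analytically Z = n0 * 6! / lam**7.  With n0 in mm^-1 m^-3 and lam in
    mm^-1, Z is in mm^6 m^-3."""
    return n0 * math.factorial(6) / lam**7

def to_dbz(z_lin):
    """Convert linear reflectivity (mm^6 m^-3) to logarithmic dBZ."""
    return 10.0 * math.log10(z_lin)
```

For example, with the classic Marshall–Palmer intercept N₀ = 8000 mm⁻¹ m⁻³ and Λ = 2 mm⁻¹, this gives Z = 45 000 mm⁶ m⁻³, i.e. about 46.5 dBZ.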
PARSIVEL Snow Observations: A Critical Assessment. Battaglia, Alessandro; Rustemeier, Elke; Tokay, Ali. Journal of Atmospheric and Oceanic Technology, 02/2010, Volume 27, Issue 2. Journal article, peer-reviewed, open access.
The performance of the laser-optical Particle Size Velocity (PARSIVEL) disdrometer is evaluated to determine the characteristics of falling snow. PARSIVEL's measuring principle is reexamined to detect its limitations and pitfalls when applied to solid precipitation. This study uses snow observations taken during the Canadian CloudSat/Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Validation Project (C3VP) campaign, when two PARSIVEL instruments were collocated with a single two-dimensional video disdrometer (2-DVD), which allows more detailed observation of snowflakes. When characterizing snowflake size, PARSIVEL instruments inherently retrieve only one size parameter, which is approximately equal to the widest horizontal dimension (more accurately so for large snowflakes) and has no microphysical meaning. Unlike for raindrops, the equivolume PARSIVEL diameter, the PARSIVEL output variable, has no physical counterpart for snowflakes. PARSIVEL's fall velocity measurement may not be accurate for a single snowflake particle because of the internally assumed relationship between the horizontal and vertical dimensions of snow particles. The uncertainty originates from the shape-related factor, which tends to depart more and more from unity with increasing snowflake size and can produce large errors. When averaging over a large number of snowflakes, the correction factor is size dependent, with a systematic tendency toward an underestimation of the fall speed (never exceeding 20%, however). Compared to a collocated 2-DVD for long-lasting events, PARSIVEL seems to overestimate the number of small snowflakes and large particles. The disagreement between PARSIVEL and 2-DVD snow measurements can only partly be ascribed to PARSIVEL's intrinsic limitations (border effects and sizing problems); it also reflects the difficulties and drawbacks of both instruments in fully characterizing snow properties.
Large‐eddy simulations (LES) with the new ICOsahedral Non‐hydrostatic atmosphere model (ICON) covering Germany are evaluated for four days in spring 2013 using observational data from various sources. Simulations with the established Consortium for Small‐scale Modelling (COSMO) numerical weather prediction model and further standard LES codes are performed and used as references. This comprehensive evaluation approach covers multiple parameters and scales, focusing on boundary‐layer variables, clouds and precipitation. The evaluation points to the need to work on parametrizations influencing the surface energy balance, and possibly on ice cloud microphysics. The central purpose for the development and application of ICON in the LES configuration is the use of simulation results to improve the understanding of moist processes, as well as their parametrization in climate models. The evaluation thus aims at building confidence in the model's ability to simulate small‐ to mesoscale variability in turbulence, clouds and precipitation. The results are encouraging: the high‐resolution model matches the observed variability at small to mesoscales much better than the coarser‐resolved reference model. At its highest grid resolution, the simulated turbulence profiles are realistic and column water vapour matches the observed temporal variability at short time‐scales. Despite being somewhat too large and too frequent, small cumulus clouds are well represented in comparison with satellite data, as is the shape of the cloud size spectrum. Variability of cloud water matches the satellite observations much better in ICON than in the reference model. In this sense, it is concluded that the model is fit for the purpose of using its output for parametrization development, despite the potential to further improve some important aspects of processes that are also parametrized in the high‐resolution model.
Visible images of (top row) MODIS satellite (200 m resolution) and (middle row) synthetic radiances based on simulations with 156 m (625 m in the rightmost column) resolution using the new ICOsahedral Non‐hydrostatic (ICON) model for four simulated days in spring 2013. Bottom row: zoom into North Sea coastal region, 24 April (white dashed box in panel a). This is one of several approaches to evaluate the new ICON model using multiple observations with a focus on clouds and precipitation.
Simulation of radar beam propagation is an important component of numerous radar applications in meteorology, including height assignment, quality control, and especially the so-called radar forward operator. Although beam propagation in the atmosphere depends on the refractive index and its vertical variation, which themselves depend on the actual state of the atmosphere, the most common method is to apply the 4/3 earth radius model, based on climatological standard conditions. Serious deviations from the climatological value can occur under so-called ducting conditions, where radar beams at low elevations can be trapped or propagate in a waveguide-like fashion, such that this model is unsuitable in such cases. To account for the actual atmospheric conditions, sophisticated methods have been developed in the literature. However, concerning the practical implementation of these methods, the descriptions in the literature are not always complete with respect to possible pitfalls. In this paper, a revised version of an existing method (one example of the above-mentioned pitfalls) is introduced that exploits Snell's law for spherically stratified media. From Snell's law, the correct sign of the local elevation is a priori ambiguous, and the revised method explicitly applies (i) a total reflection criterion and (ii) an additional ad hoc criterion to resolve the ambiguity. Additionally, a new method, based on an ordinary differential equation with respect to range, is proposed in this paper that has no such ambiguity. Sensitivity experiments are conducted to investigate the properties of these three methods. The results show that both the revised and the new method are robust under nonstandard conditions. However, considering the need to catch an elevation sign ambiguity in the revised method (which may fail in rare instances), the new method is regarded as more robust and unproblematic, for example, for applications in radar forward operators.
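For reference, the 4/3 earth radius model discussed above has a well-known closed form for the beam height as a function of slant range and elevation (Doviak and Zrnić): h = √(r² + (kₑa)² + 2 r kₑ a sin θ) − kₑa. The sketch below implements only this standard-refraction baseline, not the revised Snell's-law method or the new ODE-based method of the paper.

```python
import math

A_EARTH = 6371.0e3   # mean earth radius (m)
K_E = 4.0 / 3.0      # effective-earth-radius factor for standard refraction

def beam_height(r, elev_deg, h_radar=0.0, ke=K_E, a=A_EARTH):
    """Beam height (m) above the radar datum at slant range r (m) and
    elevation angle elev_deg, under the 4/3 earth radius model:
    h = sqrt(r^2 + (ke*a)^2 + 2*r*ke*a*sin(theta)) - ke*a + h_radar."""
    kea = ke * a
    theta = math.radians(elev_deg)
    return math.sqrt(r * r + kea * kea + 2.0 * r * kea * math.sin(theta)) - kea + h_radar
```

At zero elevation and 100 km range this gives roughly 590 m of beam height purely from earth curvature and standard refraction; under ducting conditions the true propagation path can deviate strongly from this value, which is the motivation for the more sophisticated methods above.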
Assimilation of weather radar measurements, including radar reflectivity and radial wind data, has been operational at the Deutscher Wetterdienst with a diagonal observation error (OE) covariance matrix. For an implementation of a full OE covariance matrix, the statistics of the OE have to be estimated a priori, for which the Desroziers method has often been used. However, the resulting statistics consist of contributions from different error sources and are difficult to interpret. In this work, we use an approach based on samples of the truncation error in radar observation space to approximate the representation error due to unresolved scales and processes (RE), and compare its statistics with the OE statistics estimated by the Desroziers method. It is found that the statistics of the RE help explain several important features in the variances and correlation length scales of the OE for both reflectivity and radial wind data. Other error sources, from the microphysical scheme, the radar observation operator and the superobbing technique, may also contribute, for instance, to differences among elevations and observation types. The statistics presented here can serve as a guideline for selecting which observations are assimilated and for the assignment of the OE covariance matrix, be it diagonal or full and correlated.
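The Desroziers diagnostic referred to above estimates the observation-error covariance from assimilation output as R ≈ E[(y − H(xₐ))(y − H(x_b))ᵀ], i.e. the cross-covariance of analysis residuals and background innovations in observation space. A minimal sketch of that sample estimate:

```python
import numpy as np

def desroziers_R(d_ob, d_oa):
    """Desroziers estimate of the observation-error covariance matrix:
    R ~ mean over samples of (y - H(x_a)) (y - H(x_b))^T, where
    d_ob = y - H(x_b) are background innovations and
    d_oa = y - H(x_a) are analysis residuals.
    Both inputs have shape (n_samples, n_obs)."""
    d_ob = np.asarray(d_ob, dtype=float)
    d_oa = np.asarray(d_oa, dtype=float)
    return d_oa.T @ d_ob / d_ob.shape[0]
```

As the abstract notes, the estimate lumps together all error sources seen through the innovations (instrument, forward operator, representation), which is why a separate approximation of the representation error is useful for interpreting its structure.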
Radar data assimilation has been operational at the Deutscher Wetterdienst for several years and is essential for generating accurate precipitation forecasts. The current work attempts to further enhance the radar data assimilation by improving the latent heat nudging (LHN) scheme and by reducing the observation error (OE) caused by the representation error of the efficient modular volume radar operator (EMVORADO). First, a series of hindcasts for a one-month convective period over Germany is performed. Compared with radar reflectivity and satellite observations, it is found that the LHN scheme that implicitly adjusts temperature performs better, and that the beam broadening effect and the choice of the scattering scheme in EMVORADO are important. Moreover, the Mie scheme with the new parameterization to reduce the brightband effect not only proves to be the best in hindcasts but also results in the smallest standard deviations and the shortest horizontal correlation length scales of the OE in data assimilation experiments.
In the present work, we investigate the impacts on the observation error (OE) statistics of different types of errors in the forward operator (FE), for both radar reflectivity and radial wind data, in the context of convective-scale data assimilation in summertime. A series of sensitivity experiments was conducted with the Efficient Modular VOlume RADar Operator (EMVORADO), using the operational data assimilation system of the Deutscher Wetterdienst (DWD, German Weather Service). The investigated FEs are versatile, including errors caused by neglecting the terminal fall speed of hydrometeors, the reflectivity weighting, and the beam broadening and attenuation effects, as well as errors caused by different scattering schemes and formulations for melting particles. For reflectivity, it is found that accounting for the beam broadening effect evidently reduces the standard deviations, especially at higher altitudes. However, it does not shorten the horizontal or along-beam correlation length scales. In a comparison between the Rayleigh and the Mie schemes (with specific configurations), the former results in much smaller standard deviations for heights up to 4 km and, aloft, in slightly larger standard deviations. Adding attenuation to the Mie scheme slightly reduces the standard deviations at lower altitudes; however, it largely increases the standard deviations at higher altitudes and also leads to longer correlation length scales. For radial wind, positive impacts of considering the beam broadening effect on standard deviations and neutral impacts on correlations are observed. For both reflectivity and radial wind, taking the terminal fall speed of hydrometeors and the reflectivity weighting into account does not make a notable difference in the estimated OE statistics.
Parametrization of radiation transfer through clouds is an important factor in the ability of numerical weather prediction models to correctly describe the weather evolution. Here we present a practical parameterization of the optical properties of both liquid droplets and ice in the longwave and shortwave radiation. An advanced spectral averaging method is used to calculate the extinction coefficient, single scattering albedo, forward scattered fraction and asymmetry factor (βext, ϖ, f, g), taking into account the nonlinear effects of light attenuation in the spectral averaging. An ensemble of particle size distributions was used for the ice optical property calculations, which enables the effective size range to be extended up to 570 μm and thus to be applicable to larger hydrometeor categories such as snow, graupel, and rain. The new parameterization was applied both in the COSMO limited-area model and in the ICON global model and was evaluated by using the COSMO model to simulate stratiform ice and water clouds. Numerical weather prediction models usually determine the asymmetry factor as a function of effective size. For the first time in an operational numerical weather prediction (NWP) model, the asymmetry factor is parametrized as a function of aspect ratio. The method is generalized and is available online to be readily applied to any optical properties dataset and to the spectral intervals of a wide range of radiation transfer models and applications.
Many post‐processing methods improve forecasts at individual locations but remove their correlation structure. However, this information is essential for forecasting larger‐scale events, such as the total precipitation amount over areas like river catchments, which are relevant for weather warnings and flood predictions. We propose a method to reintroduce spatial correlation into a post‐processed forecast using an R‐vine copula fitted to historical observations. The method rearranges predictions at individual locations and ensures that they still exhibit the post‐processed marginal distributions. It works similarly to well‐known approaches such as the "Schaake shuffle" and "ensemble copula coupling." However, compared to these methods, which rely on a ranking with no ties at each considered location in their source of spatial correlation, the copula serves as a measure of how well a given arrangement compares with the observed historical distribution. Therefore, no close relationship is required between the post‐processed marginal distributions and the spatial correlation source. This is advantageous for post‐processed seamless forecasts in two ways. First, meteorological parameters such as the precipitation amount, whose distribution has an atom at zero, have rankings with ties. Second, seamless forecasts represent an optimal combination of their input forecasts and may be spatially shifted from them at scales larger than the areas considered herein, leading to unsuitable spatial correlation sources for the well‐known methods. Our results indicate that the calibration of the combination model carries over to the output of the proposed model; that is, the evaluation of area predictions shows a similar improvement in forecast quality as the predictions for individual locations. Additionally, the spatial correlation of the forecast is evaluated with the help of object‐based metrics, for which the proposed model also shows an improvement compared to both input forecasts.
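For context, the baseline "Schaake shuffle" idea that the proposed copula method generalizes can be sketched directly: post-processed samples at each location are reordered so that their ranks follow the rank structure of a historical template, leaving the marginal distributions untouched. This sketch shows only that rank-reordering baseline, not the R-vine copula method of the article (which avoids the no-ties requirement the shuffle imposes).

```python
import numpy as np

def rank_shuffle(samples, template):
    """Schaake-shuffle-style reordering.
    samples:  post-processed forecast members, shape (n_members, n_locations)
    template: historical field used as the spatial-correlation source,
              same shape, assumed to have no ties within a column.
    At each location, the sorted sample values are assigned the rank
    order of the template, so marginals are preserved exactly while the
    template's spatial rank structure is imposed."""
    samples = np.asarray(samples, dtype=float)
    template = np.asarray(template, dtype=float)
    out = np.empty_like(samples)
    for j in range(samples.shape[1]):
        ranks = np.argsort(np.argsort(template[:, j]))  # rank of each member
        out[:, j] = np.sort(samples[:, j])[ranks]
    return out
```

The no-ties assumption in the comment is exactly where this baseline breaks down for precipitation, whose distribution has an atom at zero; the copula-based measure in the article sidesteps that limitation.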
Exemplary structure of an R‐vine copula for a five‐dimensional distribution. Each vertex in the bottom tree represents a marginal distribution, whereas the vertices in the second tree represent joint bivariate distributions. In the third and following trees, each vertex represents a joint conditional distribution for two components conditioned on one or more other components. Once fitted to data, the copula can be used to generate samples or to evaluate functions such as its density. Note that this example was previously presented in Aigner et al. (2023. Optimization and Engineering, 24, 1951–1982).