Quantile mapping bias correction algorithms are commonly used to correct systematic distributional biases in precipitation outputs from climate models. Although they are effective at removing historical biases relative to observations, it has been found that quantile mapping can artificially corrupt future model-projected trends. Previous studies on the modification of precipitation trends by quantile mapping have focused on mean quantities, with less attention paid to extremes. This article investigates the extent to which quantile mapping algorithms modify global climate model (GCM) trends in mean precipitation and precipitation extremes indices. First, a bias correction algorithm, quantile delta mapping (QDM), that explicitly preserves relative changes in precipitation quantiles is presented. QDM is compared on synthetic data with detrended quantile mapping (DQM), which is designed to preserve trends in the mean, and with standard quantile mapping (QM). Next, methods are applied to phase 5 of the Coupled Model Intercomparison Project (CMIP5) daily precipitation projections over Canada. Performance is assessed based on precipitation extremes indices and results from a generalized extreme value analysis applied to annual precipitation maxima. QM can inflate the magnitude of relative trends in precipitation extremes with respect to the raw GCM, often substantially, as compared to DQM and especially QDM. The degree of corruption in the GCM trends by QM is particularly large for changes in long period return values. By the 2080s, relative changes in excess of +500% with respect to historical conditions are noted at some locations for 20-yr return values, with maximum changes by DQM and QDM nearing +240% and +140%, respectively, whereas raw GCM changes are never projected to exceed +120%.
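A minimal sketch of the multiplicative (ratio) form of QDM for precipitation follows, assuming empirical plotting-position quantiles and strictly positive data; the function name is illustrative, and the treatment of zero-precipitation days required in practice is omitted:

```python
import numpy as np

def qdm_precip(obs_hist, mod_hist, mod_fut):
    """Quantile delta mapping, ratio form: bias-correct mod_fut against
    obs_hist while preserving the model's relative change at each quantile."""
    n = len(mod_fut)
    # Non-exceedance probability of each future value within the
    # future model distribution (plotting-position ranks).
    tau = (np.argsort(np.argsort(mod_fut)) + 0.5) / n
    # Relative change signal: future model value over the historical
    # model value at the same quantile.
    delta = mod_fut / np.quantile(mod_hist, tau)
    # Map tau onto the observed distribution and reapply the change.
    return np.quantile(obs_hist, tau) * delta
```

When the model is unbiased (obs_hist and mod_hist identical), the correction leaves mod_fut unchanged, which reflects the trend-preservation property the abstract highlights.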
Five statistical downscaling methods, automated regression-based statistical downscaling (ASD), bias correction spatial disaggregation (BCSD), quantile regression neural networks (QRNN), TreeGen (TG), and expanded downscaling (XDS), are compared with respect to representing climatic extremes. The tests are conducted at six stations from the coastal, mountainous, and taiga region of British Columbia, Canada, whose climatic extremes are measured using the 27 Climate Indices of Extremes (ClimDEX; http://www.climdex.org/climdex/index.action). All methods are calibrated from data prior to 1991, and tested against the two decades from 1991 to 2010. A three-step testing procedure is used to establish a given method as reliable for any given index. The first step analyzes the sensitivity of a method to actual index anomalies by correlating observed and NCEP-downscaled annual index values; then, whether the distribution of an index corresponds to observations is tested. Finally, this latter test is applied to a downscaled climate simulation. This gives a total of 486 single and 162 combined tests. The temperature-related indices pass about twice as many tests as the precipitation indices, and temporally more complex indices that involve consecutive days pass none of the combined tests. With respect to regions, there is some tendency of better performance at the coastal and mountaintop stations. With respect to methods, XDS performed best, on average, with 19% (48%) of passed combined (single) tests, followed by BCSD and QRNN with 10% (45%) and 10% (31%), respectively, ASD with 6% (23%), and TG with 4% (21%) of passed tests. Limitations of the testing approach and possible consequences for the downscaling of extremes in these regions are discussed.
Downscaling Extremes. Bürger, G.; Sobie, S. R.; Cannon, A. J.; et al. Journal of Climate, 05/2013, Vol. 26, No. 10. Journal Article. Peer reviewed. Open access.
This study follows up on a previous downscaling intercomparison for present climate. Using a larger set of eight methods the authors downscale atmospheric fields representing present (1981–2000) and future (2046–65) conditions, as simulated by six global climate models following three emission scenarios. Local extremes were studied at 20 locations in British Columbia as measured by the same set of 27 indices, ClimDEX, as in the precursor study. Present and future simulations give 2 × 3 × 6 × 8 × 20 × 27 = 155 520 index climatologies whose analysis in terms of mean change and variation is the purpose of this study. The mean change generally reinforces what is to be expected in a warmer climate: that extreme cold events become less frequent and extreme warm events become more frequent, and that there are signs of more frequent precipitation extremes. There is considerable variation, however, about this tendency, caused by the influence of scenario, climate model, downscaling method, and location. This is analyzed using standard statistical techniques such as analysis of variance and multidimensional scaling, along with an assessment of the influence of each modeling component on the overall variation of the simulated change. It is found that downscaling generally has the strongest influence, followed by climate model; location and scenario have only a minor influence. The influence of downscaling could be traced back in part to various issues related to the methods, such as the quality of simulated variability or the dependence on predictors. Using only methods validated in the precursor study considerably reduced the influence of downscaling, underpinning the general need for method verification.
Landslide hazards in British Columbia are mainly caused by precipitation and can result in significant damage and fatalities. Anthropogenic climate change is expected to increase precipitation frequency and intensity in the winter, spring, and fall in British Columbia (BC), potentially resulting in increased frequency of landslide hazard. Quantifying the effect of changing precipitation on future landslide hazard across the varying topographic and climatic conditions in BC requires detailed projections of future precipitation. Here, the operational Landslide Hazard Assessment for Situational Awareness (LHASA) model is used with high-resolution, statistically downscaled daily precipitation to generate detailed simulations of landslide hazard in BC over the twenty-first century. Historical evaluation of the LHASA model is performed using a station-based, gridded observational precipitation dataset. Classification of observed landslide dates and locations as hazard events occurs as successfully as, or slightly better than, when LHASA is applied globally with satellite precipitation. Using the LHASA model with precipitation projections from 12 downscaled global climate models following RCP8.5 indicates that future landslide hazard frequency will increase from 16 days per year to 21 days per year (32%) on average by the 2050s for landslide susceptible regions in the province. Areas of the province currently with the most frequent landslide hazards (18 to 21 days per year), including the west coast and northern Rocky Mountains, are expected to see between 8 and 11 additional hazardous days (49 to 61% increases) per year. Most of the increased hazard frequency occurs during winter and fall, reflecting those seasons with the largest projected increases in single and multi-day precipitation. Risk assessments for regions in British Columbia vulnerable to landslides will need to account for increasing hazard due to climate change altered precipitation.
Utilizing clouds for Belle II. Sobie, R. J. Journal of Physics: Conference Series, 12/2015, Vol. 664, No. 2. Journal Article. Peer reviewed. Open access.
This paper describes the use of cloud computing resources for the Belle II experiment. A number of different methods are used to exploit the private and opportunistic clouds. Clouds are making significant contributions to the generation of Belle II MC data samples and it is expected that their impact will continue to grow over the coming years.
The use of opportunistic cloud resources by HEP experiments has significantly increased over the past few years. Clouds that are owned or managed by the HEP community are connected to the LHCONE network or the research network with global access to HEP computing resources. Private clouds, such as those supported by non-HEP research funds, are generally connected to the international research network; however, commercial clouds are either not connected to the research network or only connect to research sites within their national boundaries. Since research network connectivity is a requirement for HEP applications, we need to find a solution that provides a high-speed connection. We are studying a solution with a virtual router that will address the use case when a commercial cloud has research network connectivity in a limited region. In this situation, we host a virtual router in our HEP site and require that all traffic from the commercial site transit through the virtual router. Although this may increase the network path and also the load on the HEP site, it is a workable solution that would enable the use of the remote cloud for low I/O applications. We are exploring some simple open-source solutions. In this paper, we present the results of our studies and how they will benefit our use of private and public clouds for HEP computing.
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
Recent studies have identified stronger warming in the latest generation of climate model simulations globally, and the same is true for projected changes in Canada. This study examines differences for Canada and six sub-regions between simulations from the latest Sixth Coupled Model Intercomparison Project (CMIP6) and its predecessor CMIP5. Ensembles from both experiments are assessed using a set of derived indices calculated from daily precipitation and temperature, with projections compared at fixed future time intervals and fixed levels of global temperature change. For changes calculated at fixed time intervals most temperature indices display higher projected changes in CMIP6 than CMIP5 for most sub-regions, while greater precipitation changes in CMIP6 occur mainly in extreme precipitation indices. When future projections are calculated at fixed levels of global average temperature increase, the size and spread of differences for future projected changes between CMIP6 and CMIP5 are substantially reduced for most indices. Temperature scaling behaviour, or the regional response to increasing global temperatures, is similar in both ensembles, with annual temperature anomalies for Canada and its sub-regions increasing at between 1.5 and 2.5 times the rate of increase globally, depending on the region. The CMIP6 ensemble projections exhibit modestly stronger scaling behaviour for temperature anomalies in northern Canada, as well as for certain indices of moderate and extreme events. Such temperature scaling differences persist even if anomalously warm CMIP6 global climate models are omitted. Comparing the mean and variance of future projections for Canada in CMIP5 and CMIP6 simulations from the same modelling centre suggests CMIP6 models are significantly warmer in Canada than CMIP5 models at the same level of forcing, with some evidence that internal temperature variability in CMIP6 is reduced compared with CMIP5.
Input data for applications that run in cloud computing centres can be stored at distant repositories, often with multiple copies of the popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. A federation would locate the closest copy of the data on the basis of GeoIP information. Currently we are using the dynamic data federation Dynafed, a software solution developed by CERN IT. Dynafed supports several industry standards for connection protocols, such as Amazon's S3 and Microsoft's Azure, as well as WebDAV and HTTP. Dynafed functions as an abstraction layer under which protocol-dependent authentication details are hidden from the user, requiring the user to only provide an X509 certificate. We have set up an instance of Dynafed and integrated it into the ATLAS data distribution management system. We report on the challenges faced during the installation and integration. We have tested ATLAS analysis jobs submitted by the PanDA production system and we report on our first experiences with its operation.
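The GeoIP-based closest-copy selection described above can be illustrated with a simple great-circle nearest-replica lookup; the data layout and function names here are hypothetical sketches, not Dynafed's actual API:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def closest_replica(client_latlon, replicas):
    """Pick the replica endpoint nearest to the client's coordinates."""
    lat, lon = client_latlon
    return min(replicas, key=lambda rep: haversine_km(lat, lon, rep["lat"], rep["lon"]))
```

In a real federation the client coordinates would come from a GeoIP lookup of the requesting IP address rather than being supplied directly.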
Knowledge from high-resolution daily climatological parameters is frequently sought after for increasingly local climate change assessments. This research investigates whether applying a simple postprocessing methodology to existing statistically downscaled temperature and precipitation fields can result in improved downscaled simulations useful at the local scale. Initial downscaled daily simulations of temperature and precipitation at 10-km resolution are produced using bias correction constructed analogs with quantile mapping (BCCAQ). Higher-resolution (800 m) values are then generated using the simpler climate imprint technique in conjunction with temperature and precipitation climatologies from the Parameter-Elevation Regression on Independent Slopes Model (PRISM). The potential benefit of additional downscaling to 800 m is evaluated using the "Climdex" set of 27 indices of extremes established by the Expert Team on Climate Change Detection and Indices (ETCCDI). These indices are also calculated from weather station observations recorded at 22 locations within southwestern British Columbia, Canada, to evaluate the performance of both the 10-km and 800-m datasets in replicating the observed quantities. In a 30-yr historical evaluation period, Climdex indices computed from 800-m simulated values display reduced error relative to local station observations compared with those from the 10-km dataset, with the greatest reduction in error occurring at high-elevation sites for precipitation-based indices.
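The climate imprint step described above, scaling coarse daily fields by the ratio of a high-resolution climatology to the coarse one, can be sketched as follows for precipitation; the array shapes and the assumption that the coarse data have already been interpolated onto the fine grid are simplifications:

```python
import numpy as np

def climate_imprint(daily_coarse, clim_coarse, clim_fine):
    """Multiplicative climate imprint for precipitation.

    daily_coarse: (time, ny, nx) daily fields already interpolated to the
    fine grid; clim_coarse and clim_fine: (ny, nx) long-term climatologies
    on the same grid (e.g. interpolated BCCAQ vs. PRISM means)."""
    # Spatial scaling factor; broadcasts over the time dimension.
    ratio = clim_fine / clim_coarse
    return daily_coarse * ratio
```

For temperature, the analogous imprint would be additive, daily_coarse + (clim_fine - clim_coarse), since temperature biases are offsets rather than ratios.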