Abstract
We compare results for 12 multi-population mortality models fitted to 10 distinct socio-economic groups in England, subdivided using the Index of Multiple Deprivation. Using the Bayesian Information Criterion (BIC) to compare models, we find that a special case of the common age effect (CAE) model fits best in a variety of situations, achieving the best balance between goodness of fit and parsimony. We provide a detailed discussion of key models to highlight which features are important. Group-specific period effects are found to be more important than group-specific age effects, and non-parametric age effects deliver significantly better results than parametric (e.g. linear) age effects. We also find that the addition of cohort effects is beneficial in some cases but not all. The preferred CAE model has the additional benefit of being coherent in the sense of Hyndman et al. ((2013) Demography 50(1), 261–283); some of the other models considered are not.
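As a rough illustration of the model-comparison machinery, the sketch below (Python, with hypothetical array names and shapes) computes the BIC from the usual Poisson log-likelihood for death counts and encodes the CAE structure, in which a common age modulation is shared across groups while the level and period effects remain group-specific; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def poisson_bic(deaths, exposures, log_mortality, n_params):
    """BIC under the standard Poisson assumption for death counts.
    deaths, exposures, log_mortality: arrays of shape (ages, years, groups)."""
    mu = exposures * np.exp(log_mortality)      # fitted expected deaths
    ll = np.sum(deaths * np.log(mu) - mu)       # log-likelihood up to a data constant
    return -2.0 * ll + n_params * np.log(deaths.size)

def cae_log_mortality(alpha, beta, kappa):
    """Common age effect structure:
    log m_{x,t}^{(g)} = alpha_x^{(g)} + beta_x * kappa_t^{(g)},
    with alpha: (ages, groups), beta: (ages,) shared, kappa: (years, groups)."""
    return alpha[:, None, :] + beta[:, None, None] * kappa[None, :, :]
```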
In this paper, we propose the generalized state-space hedging method for use when the populations associated with the hedging instruments and the liability being hedged are different. In this method, the hedging strategy is derived by first reformulating the assumed multi-population stochastic mortality model in a state-space representation, and then considering the sensitivities of the hedge portfolio and the liability being hedged to all relevant hidden states. Inter alia, this method allows us to decompose the underlying longevity risk into components arising solely from the hidden states that are shared by all populations and components stemming exclusively from the hidden states that are population-specific. The latter components collectively represent an explicit measure of the population basis risk involved. Through this measure, we can infer that a portion of population basis risk depends on how the longevity hedge is constructed while another portion exists no matter what the notional amounts of the hedging instruments are. We present the proposed hedging method in both static and dynamic settings.
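A minimal sketch of the sensitivity-matching idea, assuming linear ("delta") sensitivities to the hidden states have already been computed; the function name, shapes, and least-squares criterion are illustrative, not the paper's exact construction.

```python
import numpy as np

def hedge_notionals(L_sens, H_sens, n_shared):
    """L_sens: liability sensitivities to the hidden states, shape (n_states,),
    with the shared states listed first; H_sens: instrument sensitivities,
    shape (n_states, n_instruments). Returns notionals plus the residual
    exposures split into shared and population-specific parts."""
    w, *_ = np.linalg.lstsq(H_sens, L_sens, rcond=None)
    residual = L_sens - H_sens @ w
    shared_resid = residual[:n_shared]       # exposure to states common to all populations
    specific_resid = residual[n_shared:]     # explicit measure of population basis risk
    return w, shared_resid, specific_resid
```

The split makes the decomposition concrete: changing the notionals w reshapes both residual pieces, but instruments written on another population have little or no sensitivity to the liability population's own states, so part of specific_resid persists for any choice of notionals.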
Forecasting mortality remains a major challenge for governments that need reliable projections to define economic policy at the local and national levels. The accuracy of mortality forecasting is considered an important issue for longevity risk management. In the literature, many authors have analyzed the long-run relationship between mortality evolution and socioeconomic variables, such as economic growth, the unemployment rate or educational level. This paper investigates the existence of a link between mortality and real gross domestic product per capita (GDPPC) over time in the Italian regions. Empirical evidence shows the presence of a relationship between mortality and the level of real GDPPC (and not its trend). Therefore, we propose a multi-population model including the level of real GDPPC and we compare it with the Boonen–Li model (Boonen and Li in Demography 54:1921–1946, 2017). The validity of the model is tested in an out-of-sample forecasting experiment.
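As an illustration of how the level (rather than the trend) of GDPPC can enter such a model, the sketch below writes a Lee–Carter-style structure with a region-specific GDPPC level term; the parameterization is a plausible reading of the idea, not necessarily the authors' exact specification.

```python
import numpy as np

def log_mortality_gdppc(alpha, beta, kappa, gamma, gdppc_level):
    """Illustrative structure:
    log m_{x,t}^{(r)} = alpha_x^{(r)} + beta_x * kappa_t + gamma_x * g_t^{(r)},
    alpha: (ages, regions), beta, gamma: (ages,), kappa: (years,),
    gdppc_level: (years, regions), e.g. log real GDP per capita by region."""
    common = alpha[:, None, :] + beta[:, None, None] * kappa[None, :, None]
    return common + gamma[:, None, None] * gdppc_level[None, :, :]
```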
In this paper we address the problem of projecting mortality when data are severely affected by random fluctuations, due in particular to a small sample size, or when data are scanty. Such situations may emerge when dealing with small populations, such as small countries (possibly previously part of a larger country), a specific geographic area of a (large) country, a life annuity portfolio or a pension fund, or when the investigation is restricted to the oldest ages. The critical issues arising from the volatility of data due to the small sample size (especially at the highest ages) may be made worse by missing records; this is the case, for example, of a small country previously part of a larger country, or a specific geographic area of a country, given that in some periods mortality data may have been collected only at an aggregate level.
We suggest ‘replicating’ the mortality of the small population by appropriately mixing mortality data obtained from other populations. We design a two-step procedure. First, we obtain the average mortality of ‘neighboring’ populations. Three alternative approaches are tested for the assessment of the average mortality, while the identification and weighting of the neighboring populations are obtained through (standard) optimization techniques. Then, following a sort of credibility approach, we mix the original mortality data of the small population with the average mortality of the neighboring populations.
In principle, the approach described in the paper could be adopted for any population, whatever its size, with the aim of improving mortality projections using information collected from other groups. Through backtesting, we show that the procedure we suggest is advantageous for small populations, but not necessarily for large populations, nor for populations whose data do not show noticeable erratic effects. This finding can be explained as follows: while replicating the original data increases the sample size, it also smooths the data, with a possible loss of information specific to the group in question. In the case of small populations showing major erratic movements in mortality data, the advantages gained from the larger sample size outweigh the disadvantages of the smoothing effect.
• We address the projection of mortality when data show major random fluctuations.
• The mortality of the small group is ‘replicated’ by mixing data of other groups.
• The procedure is successful in the face of major erratic movements in mortality data.
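A minimal sketch of the two-step procedure described above, assuming non-negative neighbor weights fitted by least squares and a fixed credibility weight z; both are simplifications of the paper's optimization and credibility machinery.

```python
import numpy as np
from scipy.optimize import nnls

def replicate_small_population(m_small, m_candidates, z):
    """m_small: (ages, years) log-rates of the small population (noisy);
    m_candidates: (n_pops, ages, years) log-rates of candidate neighbors;
    z in [0, 1]: credibility weight on the small population's own data."""
    n_pops = m_candidates.shape[0]
    X = m_candidates.reshape(n_pops, -1).T    # (ages*years, n_pops)
    w, _ = nnls(X, m_small.ravel())           # step 1: identify/weight neighbors
    w = w / w.sum()                           # normalize to a weighted average
    m_neighbors = np.tensordot(w, m_candidates, axes=1)
    return z * m_small + (1.0 - z) * m_neighbors   # step 2: credibility-style mix
```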
Modeling mortality co-movements for multiple populations has significant implications for mortality/longevity risk management. A few two-population mortality models have been proposed to date. They are typically based on the assumption that the forecasted mortality experiences of two or more related populations converge in the long run. This assumption might be justified by long-term mortality co-integration and thus be applicable to longevity risk modeling; however, it seems too strong for modeling short-term mortality dependence. In this paper, we propose a two-stage procedure based on time series analysis and a factor copula approach to model mortality dependence for multiple populations. In the first stage, we filter the mortality dynamics of each population using an ARMA–GARCH process with heavy-tailed innovations. In the second stage, we model the residual risk using a one-factor copula model that is widely applicable to high-dimensional data and very flexible in terms of model specification. We then illustrate how to use our mortality model and the maximum entropy approach for mortality risk pricing and hedging. Our model generates par spreads that are very close to the actual spreads of the Vita III mortality bond. We also propose a longevity trend bond and demonstrate how to use this bond to hedge the residual longevity risk of an insurer with both annuity and life books of business.
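The two-stage procedure might be sketched as follows, using the Python arch package for stage one and, for stage two, the Gaussian special case of a one-factor copula (the paper's factor copula framework is more flexible than this):

```python
import numpy as np
from scipy.stats import norm, rankdata
from arch import arch_model

def filter_population(index_changes):
    """Stage 1: AR(1)-GARCH(1,1) filter with Student-t innovations for one
    population's mortality index changes (a pandas Series)."""
    am = arch_model(index_changes, mean='ARX', lags=1,
                    vol='GARCH', p=1, q=1, dist='t')
    res = am.fit(disp='off')
    return (res.resid / res.conditional_volatility).dropna()

def one_factor_loading(std_resid):
    """Stage 2 (Gaussian special case): map standardized residuals,
    shape (n_obs, n_pops), to normal scores and back out a common factor
    loading from the average pairwise correlation (rho = a^2)."""
    n_obs, n_pops = std_resid.shape
    u = rankdata(std_resid, axis=0) / (n_obs + 1)    # empirical CDF transform
    z = norm.ppf(u)
    corr = np.corrcoef(z, rowvar=False)
    avg_rho = (corr.sum() - n_pops) / (n_pops * (n_pops - 1))
    return np.sqrt(max(avg_rho, 0.0))
```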
A hierarchical credibility model is a generalization of the Bühlmann credibility model and the Bühlmann–Straub credibility model with a tree structure of four or more levels. This paper aims to incorporate hierarchical credibility theory, which is used in property and casualty insurance, to model multi-country mortality rates. The forecasting performances of the three/four/five-level hierarchical credibility models are compared with those of the classical Lee–Carter model and its three extensions for multiple populations (the joint-k, the co-integrated, and the augmented common factor Lee–Carter models). Numerical illustrations based on mortality data from the Human Mortality Database for both genders of the US, the UK and Japan with a series of fitting year spans and three forecasting periods show that the hierarchical credibility approach contributes to more accurate forecasts measured by the AMAPE (average of mean absolute percentage errors). Finally, a stochastic version of the proposed hierarchical credibility mortality model is also proposed, which can be used to construct predictive intervals on the projected mortality rates and to conduct stochastic simulations for applications.
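As a reminder of the mechanics being stacked here, a one-level Bühlmann-type shrinkage is sketched below with the textbook estimators; the paper's model repeats this shrinkage over four or five levels of the tree, which this sketch does not attempt.

```python
import numpy as np

def buhlmann_credibility(node_means, node_vars, n_obs):
    """Shrink each node's sample mean toward the collective mean with
    weight Z = n / (n + s2/a). node_means, node_vars: per-node sample
    means and within-node variances; n_obs: observations per node."""
    m = np.average(node_means, weights=n_obs)        # collective mean
    s2 = np.average(node_vars, weights=n_obs)        # expected process variance
    a = max(np.var(node_means, ddof=1) - s2 / np.mean(n_obs), 1e-12)
    Z = n_obs / (n_obs + s2 / a)                     # credibility factors
    return Z * node_means + (1.0 - Z) * m
```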
This paper constructs a theoretical framework for multi-population mortality modeling via generalized linear models and Lévy stochastic perturbations driven by a common Brownian motion and idiosyncratic factors to capture mortality shocks. By having Lévy stochastic perturbations, our model admits various jump types, which is increasingly important for capturing mortality shocks such as pandemics, particularly when they affect various populations differently. At the same time, the proposed model allows a novel dependence structure of multiple populations, which is essential when it comes to the development of multi-population or joint-life products in the context of mortality shocks. In our empirical investigations, the mortality experiences of male and female lives in the UK and Japan are used. Compared with pure Poisson-generalized linear models, the proposed multi-population model shows superiority in predicting future mortality rates.
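The dependence structure can be illustrated by simulating a common Brownian driver plus idiosyncratic diffusions and compound-Poisson jumps, one simple member of the Lévy family the model admits; all parameter names below are hypothetical.

```python
import numpy as np

def simulate_perturbations(n_pops, n_steps, dt, sig_common, sig_idio,
                           jump_rate, jump_scale, seed=None):
    """Paths of mortality perturbations for n_pops populations sharing one
    Brownian motion, each with its own diffusion and jump component."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)                 # common shock
    idio = rng.normal(0.0, np.sqrt(dt), (n_steps, n_pops))     # idiosyncratic diffusion
    n_jumps = rng.poisson(jump_rate * dt, (n_steps, n_pops))   # pandemic-like events
    # one normal draw per step scaled by the jump count: exact when at most
    # one jump occurs per step, a fine approximation for small dt
    jumps = n_jumps * rng.normal(0.0, jump_scale, (n_steps, n_pops))
    incr = sig_common * dW[:, None] + sig_idio * idio + jumps
    return np.cumsum(incr, axis=0)
```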
We aim to assess the impact of a pandemic data point on the calibration of a stochastic multi-population mortality projection model and its resulting projections for future mortality rates. Throughout the paper, we focus on the Li and Lee mortality model, which has become a standard for projecting mortality in Belgium and the Netherlands. We calibrate this mortality model on annual death counts and exposures at the level of individual ages. This type of mortality data is typically collected, produced and reported with a significant delay of, for some countries, several years on a platform such as the Human Mortality Database. To enable a timely evaluation of the impact of a pandemic data point, we have to rely on other data sources (e.g., the Short-Term Mortality Fluctuations Data series) that swiftly publish weekly mortality data collected in age buckets. To be compliant with the design and calibration strategy of the Li and Lee model, we transform the weekly mortality data collected in age buckets to yearly, age-specific observations. Therefore, our paper constructs a protocol to ungroup the death counts and exposures registered in age buckets to individual ages. To evaluate the impact of a pandemic shock, like COVID-19 in the year 2020, we weight this data point in either the calibration or projection step. Obviously, the more weight we place on this data point, the more impact we observe on future estimated mortality rates and life expectancies. Our paper allows for quantifying this impact and provides actuaries and actuarial associations with a framework to generate scenarios of future mortality under various assessments of the pandemic data point.
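A simple stand-in for the ungrouping step: spread each bucket's deaths over individual ages in proportion to a reference single-age pattern (for instance, the latest pre-pandemic year from the Human Mortality Database); the paper's actual protocol may be more elaborate.

```python
import numpy as np

def ungroup_deaths(bucket_deaths, bucket_edges, reference_deaths):
    """bucket_deaths: counts per age bucket; bucket_edges: e.g.
    [0, 25, 45, 65, 75, 85, 111]; reference_deaths: single-age counts
    covering ages 0..bucket_edges[-1]-1. Returns single-age counts whose
    bucket sums match the input."""
    out = np.zeros(len(reference_deaths))
    for d, lo, hi in zip(bucket_deaths, bucket_edges[:-1], bucket_edges[1:]):
        ref = reference_deaths[lo:hi].astype(float)
        share = ref / ref.sum() if ref.sum() > 0 else np.full(hi - lo, 1.0 / (hi - lo))
        out[lo:hi] = d * share                  # allocate the bucket proportionally
    return out
```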
In previous research on pricing mortality-linked securities, the no-arbitrage approach is often used. However, this method, which takes market prices as given, is difficult to implement in today's embryonic market where there are few traded securities. In particular, with limited market price data, identifying a risk-neutral measure requires strong assumptions. In this thesis, we approach the pricing problem from a different angle by considering economic methods. We propose pricing approaches for both competitive and non-competitive markets.
In the competitive market, we treat the pricing exercise as a Walrasian tâtonnement process, in which prices are determined through a gradual calibration of supply and demand. Such a pricing framework provides us with a pair of supply and demand curves. From these curves we can tell whether there will be any trade between the counterparties and, if so, at what price the mortality-linked security will be traded. This method does not require the market prices of other mortality-linked securities as input, which spares us the problems associated with the lack of market price data.
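In reduced form, the tâtonnement amounts to finding the price at which excess demand vanishes. The sketch below assumes each agent's curve has already been derived from its expected-utility optimization and that excess demand changes sign on the bracketing interval; the curves shown are stylized stand-ins.

```python
from scipy.optimize import brentq

def clearing_price(demand, supply, p_lo, p_hi):
    """demand, supply: callables mapping price -> quantity. Returns the
    price at which the two curves cross (no trade if they never do)."""
    return brentq(lambda p: demand(p) - supply(p), p_lo, p_hi)

# Stylized monotone curves: the hedger's demand falls with price,
# the issuer's supply rises with it; the crossing is at p = 0.6.
price = clearing_price(demand=lambda p: max(0.0, 10.0 - 8.0 * p),
                       supply=lambda p: max(0.0, 12.0 * p - 2.0),
                       p_lo=0.01, p_hi=1.0)
```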
We extend the pricing framework to incorporate population basis risk, which arises when a pension plan relies on standardized instruments to hedge its longevity risk exposure. This extension allows us to obtain the price and trading quantity of mortality-linked securities in the presence of population basis risk. The resulting supply and demand curves help us understand how population basis risk would affect the behavior of agents. We apply the method to a hypothetical longevity bond, using real mortality data from different populations. Our illustrations show that, interestingly, population basis risk can affect the price of a mortality-linked security in different directions, depending on the properties of the populations involved.
We also examine the impact of transitory mortality jumps on trading in a competitive market. Mortality dynamics are subject to jumps, due to events such as the 1918 Spanish flu. Such jumps can have a significant impact on the prices of mortality-linked securities and therefore should be taken into account in modeling. Although several single-population mortality models with jump effects have been developed, they are not adequate for trades in which population basis risk exists. We first develop a two-population mortality model with transitory jump effects, and then use the proposed model to examine how mortality jumps may affect the supply and demand of mortality-linked securities.
Finally, we model the pricing process in a non-competitive market as a bargaining game. Nash's bargaining solution is applied to obtain a unique trading contract. Because it does not require a competitive market, this approach is more appropriate for the current mortality-linked security market. We compare this approach with the competitive-market pricing method proposed earlier, and find that both pricing methods lead to Pareto-optimal outcomes.
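In one dimension, Nash's solution picks the price maximizing the product of the two parties' utility gains over their disagreement points (no trade); the gain functions below are stylized stand-ins for the certainty-equivalent gains a hedger and an issuer might derive from the contract.

```python
from scipy.optimize import minimize_scalar

def nash_bargaining_price(gain_buyer, gain_seller, p_lo, p_hi):
    """Maximize the Nash product of the two gains over price; the product
    is set to zero wherever either party would refuse to trade."""
    def neg_nash_product(p):
        gb, gs = gain_buyer(p), gain_seller(p)
        return -(gb * gs) if gb > 0 and gs > 0 else 0.0
    res = minimize_scalar(neg_nash_product, bounds=(p_lo, p_hi), method='bounded')
    return res.x

# Stylized gains: the buyer prefers a low price, the seller a high one;
# the Nash solution splits the surplus at p = 0.6.
price = nash_bargaining_price(lambda p: 1.0 - p, lambda p: p - 0.2, 0.0, 1.0)
```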