Yield curve modeling and forecasting Diebold, Francis X; Rudebusch, Glenn D
2013
eBook
Understanding the dynamic evolution of the yield curve is critical to many financial tasks, including pricing financial assets and their derivatives, managing financial risk, allocating portfolios, structuring fiscal debt, conducting monetary policy, and valuing capital goods. Unfortunately, most yield curve models tend to be theoretically rigorous but empirically disappointing, or empirically successful but theoretically lacking. In this book, Francis Diebold and Glenn Rudebusch propose two extensions of the classic yield curve model of Nelson and Siegel that are both theoretically rigorous and empirically successful. The first extension is the dynamic Nelson-Siegel model (DNS), while the second takes this dynamic version and makes it arbitrage-free (AFNS). Diebold and Rudebusch show how these two models are just slightly different implementations of a single unified approach to dynamic yield curve modeling and forecasting. They emphasize both descriptive and efficient-markets aspects, they pay special attention to the links between the yield curve and macroeconomic fundamentals, and they show why DNS and AFNS are likely to remain of lasting appeal even as alternative arbitrage-free models are developed.
Based on the Econometric and Tinbergen Institutes Lectures, Yield Curve Modeling and Forecasting contains essential tools with enhanced utility for academics, central banks, governments, and industry.
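The classic Nelson-Siegel curve that DNS makes dynamic can be evaluated directly. A minimal sketch in Python, where the parameter values (level, slope, curvature, and decay) are illustrative assumptions rather than estimates from the book:

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (years): level, slope, and
    curvature factors weighted by their exponential loadings."""
    x = lam * tau
    slope_load = (1 - np.exp(-x)) / x          # loads heavily at short maturities
    curv_load = slope_load - np.exp(-x)        # humped loading at medium maturities
    return beta0 + beta1 * slope_load + beta2 * curv_load

maturities = np.array([0.25, 1, 2, 5, 10, 30])
yields = nelson_siegel(maturities, beta0=0.05, beta1=-0.02, beta2=0.01, lam=0.6)
```

In the DNS model the three betas become time-varying factors (estimated, e.g., with a state-space model), while the decay parameter lam is typically held fixed.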
Bayesian Analysis of DSGE Models An, Sungbae; Schorfheide, Frank
Econometric reviews, 01/2007, Volume 26, Issue 2-4
Journal Article
Peer reviewed
Open access
This paper reviews Bayesian methods that have been developed in recent years to estimate and evaluate dynamic stochastic general equilibrium (DSGE) models. We consider the estimation of linearized DSGE models, the evaluation of models based on Bayesian model checking, posterior odds comparisons, and comparisons to vector autoregressions, as well as the non-linear estimation based on a second-order accurate model solution. These methods are applied to data generated from correctly specified and misspecified linearized DSGE models and a DSGE model that was solved with a second-order perturbation method.
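Posteriors of linearized DSGE models are commonly sampled with a random-walk Metropolis algorithm. A minimal sketch on a toy log-posterior; the Gaussian target, proposal scale, and draw count are assumptions for illustration, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Toy log-posterior standing in for DSGE likelihood (via Kalman filter) + prior:
    # a standard normal centered at 1 in each dimension.
    return -0.5 * np.sum((theta - 1.0) ** 2)

def rw_metropolis(theta0, n_draws=5000, scale=0.5):
    """Random-walk Metropolis: propose a Gaussian step, accept with the
    Metropolis probability, otherwise repeat the current draw."""
    draws = np.empty((n_draws, theta0.size))
    theta, lp = theta0, log_post(theta0)
    for i in range(n_draws):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

draws = rw_metropolis(np.zeros(2))
```

In practice log_post would evaluate the DSGE model's state-space likelihood plus the prior, and the chain's first half would be discarded as burn-in before computing posterior moments.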
Large Bayesian vector auto regressions Bańbura, Marta; Giannone, Domenico; Reichlin, Lucrezia
Journal of applied econometrics (Chichester, England), 01/2010, Volume 25, Issue 1
Journal Article
Peer reviewed
This paper shows that vector autoregression (VAR) with Bayesian shrinkage is an appropriate tool for large dynamic models. We build on the results of De Mol and co-workers (2008) and show that, when the degree of shrinkage is set in relation to the cross-sectional dimension, the forecasting performance of small monetary VARs can be improved by adding additional macroeconomic variables and sectoral information. In addition, we show that large VARs with shrinkage produce credible impulse responses and are suitable for structural analysis.
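The shrinkage idea can be illustrated with a ridge-penalized VAR(1). This sketch shrinks coefficients toward zero for simplicity; the Minnesota-style prior used in this literature instead shrinks toward a random walk and, as the paper argues, sets the tightness in relation to the cross-sectional dimension:

```python
import numpy as np

rng = np.random.default_rng(1)

def shrinkage_var(Y, lam):
    """Ridge-penalized VAR(1) coefficient matrix: a simplified stand-in for
    Minnesota-style Bayesian shrinkage (here shrinking toward zero)."""
    X, Z = Y[:-1], Y[1:]                       # lagged regressors and targets
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Z)

# simulate a small, stable VAR(1) with coefficient matrix 0.5*I
A = 0.5 * np.eye(3)
Y = np.zeros((200, 3))
for t in range(1, 200):
    Y[t] = Y[t - 1] @ A + rng.standard_normal(3)

B_hat = shrinkage_var(Y, lam=10.0)
```

Larger penalties pull the estimates further toward the shrinkage target, which is what keeps estimation feasible when the number of variables grows large relative to the sample.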
This book provides a self-contained, linear, and unified introduction to empirical processes and semiparametric inference. These powerful research techniques are surprisingly useful for developing methods of statistical inference for complex models and in understanding the properties of such methods. The targeted audience includes statisticians, biostatisticians, and other researchers with a background in mathematical statistics who have an interest in learning about and doing research in empirical processes and semiparametric inference but who would like to have a friendly and gradual introduction to the area. The book can be used either as a research reference or as a textbook. The level of the book is suitable for a second year graduate course in statistics or biostatistics, provided the students have had a year of graduate level mathematical statistics and a semester of probability. The book consists of three parts. The first part is a concise overview of all of the main concepts covered in the book with a minimum of technicalities. The second and third parts cover the two respective main topics of empirical processes and semiparametric inference in depth. The connections between these two topics are also demonstrated and emphasized throughout the text. Each part has a final chapter with several case studies that use concrete examples to illustrate the concepts developed so far. The last two parts also each include a chapter which covers the needed mathematical preliminaries. Each main idea is introduced with a non-technical motivation, and examples are given throughout to illustrate important concepts. Homework problems are also included at the end of each chapter to help the reader gain additional insights.
There are several models for magnetic hysteresis. Their key purpose is to model magnetization curves with a history dependence, so as to achieve hysteresis cycles without a frequency dependence. There are different approaches to handling history dependence. The two main categories are Duhem-type models and Preisach-type models. Duhem-type models handle it via a simple directional dependence on the flux rate, without a proper memory, while Preisach-type models handle it via memory of the points where the direction of the flux rate changes. The most common Duhem model is the phenomenological Jiles–Atherton model; other examples include the Coleman–Hodgdon model and the Tellinen model. Examples of Preisach-type models are the classical Preisach model and the Prandtl–Ishlinskii model, although many other models adopt a similar history dependence. Hysteresis is by definition rate-independent, and thereby not dependent on the speed of the alternating flux density. A rate dependence is nonetheless important and is often included in dynamic hysteresis models. The Chua model is common for modeling non-linear dynamic magnetization curves; however, it does not define classical hysteresis. Other variants combine hysteresis modeling with eddy-current modeling, similar to how frequency dependence is included in core-loss modeling. Most models are made for scalar values of alternating fields, but several models have vector generalizations that also consider three-dimensional directions.
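The Preisach-type memory described above can be sketched with the Prandtl–Ishlinskii model, a weighted sum of play (backlash) operators. The thresholds, weights, and input sequence below are illustrative assumptions:

```python
def play(u_seq, r, y0=0.0):
    """Play (backlash) hysteresis operator with threshold r: the output
    follows the input only once the input has moved more than r away."""
    y, out = y0, []
    for u in u_seq:
        y = max(u - r, min(u + r, y))
        out.append(y)
    return out

def prandtl_ishlinskii(u_seq, thresholds, weights):
    """Prandtl-Ishlinskii model: a weighted superposition of play operators,
    giving rate-independent hysteresis with memory of direction reversals."""
    branches = [play(u_seq, r) for r in thresholds]
    return [sum(w * b[t] for w, b in zip(weights, branches))
            for t in range(len(u_seq))]

# input ramps up to 1.0 and back down; output differs at the same input value
u = [0.0, 0.5, 1.0, 0.5, 0.0]
y = prandtl_ishlinskii(u, thresholds=[0.25], weights=[1.0])
```

At the input value 0.5 the ascending and descending branches give different outputs, which is the hysteresis loop; because the operator depends only on the sequence of reversals, not on speed, the loop is rate-independent.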
Regional scientists frequently work with regression relationships involving sample data that is spatial in nature. For example, hedonic house-price regressions relate selling prices of houses located at points in space to characteristics of the homes as well as neighborhood characteristics. Migration, commodity, and transportation flow models relate the size of flows between origin and destination regions to the distance between origin and destination as well as characteristics of both origin and destination regions. Regional growth regressions relate growth rates of a region to past-period own- and nearby-region resource inputs used in production. Spatial data typically violates the assumption, made by ordinary regression methods, that each observation is independent of the other observations. This has econometric implications for the quality of estimates and inferences drawn from nonspatial regression models. Alternative methods for producing point estimates and drawing inferences for relationships involving spatial data samples comprise the broad topic covered by spatial econometrics. Like any subdiscipline, spatial econometrics has its quirks, many of which reflect influential past literature that has gained attention in both theoretical and applied work. This article asks the question: “What should regional scientists who wish to use regression relationships involving spatial data in an effort to shed light on questions of interest in regional science know about spatial econometric methods?”
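The dependence between nearby observations is typically encoded in a spatial weight matrix. A minimal sketch of a row-normalized contiguity matrix and a simulated spatial autoregressive (SAR) process; the five-region line layout and parameter values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# row-normalized contiguity weights for 5 regions on a line
# (each region's neighbors are the adjacent regions)
n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

# spatial autoregressive (SAR) model: y = rho*W y + X*beta + eps,
# solved in reduced form as y = (I - rho*W)^{-1} (X*beta + eps)
rho, beta = 0.4, 2.0
X = rng.standard_normal(n)
eps = 0.1 * rng.standard_normal(n)
y = np.linalg.solve(np.eye(n) - rho * W, X * beta + eps)
```

Because y appears on both sides, OLS on the structural form is inconsistent; this simultaneity is one of the main reasons dedicated spatial econometric estimators exist.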
Over the past several decades, a wide range of complex structures or phenomena of interest to geologists and geochemists has been quantitatively characterized using fractal/multifractal theory and models. With respect to the application of fractal/multifractal models to geochemical data, the focus has been on how to decompose geochemical populations or quantify the spatial distribution of geochemical data. A variety of fractal/multifractal models for this purpose have been proposed on the basis of the scaling characteristics of geochemical data. These include the concentration–area (C-A) fractal model, the concentration–distance (C-D) fractal model, the spectrum–area (S-A) multifractal model, multifractal singularity analysis, and the concentration–volume (C-V) fractal model. These fractal models have been widely demonstrated to be useful, as indicated by the increasing number of published papers. In this study, fractal/multifractal modeling of geochemical data, including its theory, the way it works, its benefits and limitations, its applications, and the relationships between these models, is reviewed. A comparison of the C-A model, the S-A model, and multifractal singularity analysis based on simulated data suggests that the singularity mapping technique can enhance and identify weak anomalies caused by buried sources. Future study should focus on how to distinguish, from a fractal/multifractal perspective, true anomalies associated with mineralization from false anomalies.
•Fractal/multifractal modelling of geochemical data is reviewed.
•The C-A, S-A and singularity index are compared based on simulated data.
•The singularity mapping technique can effectively detect weak geochemical anomalies.
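The C-A model posits a power-law relation between a concentration threshold and the area exceeding it, fitted as straight-line segments on a log-log plot. A minimal sketch on simulated data, where cell counts stand in for areas and the lognormal background is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# simulated element concentrations: a single lognormal background population
conc = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

# concentration-area (C-A) relation: A(>= c) ~ c^(-alpha) on a log-log plot;
# here the number of grid cells exceeding each threshold proxies the area
thresholds = np.quantile(conc, np.linspace(0.5, 0.99, 20))
counts = np.array([(conc >= c).sum() for c in thresholds])

# fit the log-log slope; a break in slope (piecewise fit) would be used to
# separate background from anomalous populations
slope, intercept = np.polyfit(np.log(thresholds), np.log(counts), 1)
alpha = -slope
```

In practice the log-log curve is fitted with several straight-line segments, and the concentration values at the breakpoints serve as thresholds separating background from anomaly.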
The paper provides a review of the estimation of structural vector autoregressions with sign restrictions. It is shown how sign restrictions solve the parametric identification problem present in structural systems but leave the model identification problem unresolved. A market model and a macro model are used to illustrate these points. Suggestions that have been made on how to find a unique model are reviewed. An analysis is provided of whether one can recover the true impulse responses and what difficulties might arise when one wishes to use the impulse responses found with sign restrictions.
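Sign restrictions are typically implemented by rotating a Cholesky factor of the reduced-form covariance with random orthogonal matrices and keeping the draws whose impact responses satisfy the restrictions. A minimal sketch; the covariance matrix and the same-sign restriction are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

def sign_restricted_impact(Sigma, check, n_draws=1000):
    """Draw random orthogonal rotations Q of the Cholesky factor P and keep
    impact matrices B = P @ Q whose columns satisfy the sign restrictions;
    every kept B still reproduces the reduced-form covariance (B B' = Sigma)."""
    P = np.linalg.cholesky(Sigma)
    accepted = []
    for _ in range(n_draws):
        Q, _ = np.linalg.qr(rng.standard_normal(Sigma.shape))
        B = P @ Q
        if check(B):
            accepted.append(B)
    return accepted

Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
# hypothetical restriction: shock 1 moves both variables in the same direction on impact
ok = sign_restricted_impact(Sigma, lambda B: B[0, 0] * B[1, 0] > 0)
```

The set of accepted matrices illustrates the model identification problem the paper discusses: many structurally different impact matrices satisfy the same sign restrictions, so the restrictions deliver a set of models rather than a unique one.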
In large-scale panel data models with latent factors the number of factors and their loadings may change over time. Treating the break date as unknown, this article proposes an adaptive group-LASSO estimator that consistently determines the numbers of pre- and post-break factors and the stability of factor loadings if the number of factors is constant. We develop a cross-validation procedure to fine-tune the data-dependent LASSO penalties and show that after the number of factors has been determined, a conventional least-squares approach can be used to estimate the break date consistently. The method performs well in Monte Carlo simulations. In an empirical application, we study the change in factor loadings and the emergence of new factors in a panel of U.S. macroeconomic and financial time series during the Great Recession.
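The least-squares break-date step can be illustrated in a scalar setting. This sketch estimates a one-time mean shift by minimizing the combined sum of squared residuals over candidate break dates, a simplified stand-in for the factor-loading break in the panel setting; the simulated series and trimming are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def ls_break_date(y):
    """Least-squares break-date estimator for a one-time mean shift: for each
    candidate date, fit separate means before and after the break and pick
    the date minimizing the total sum of squared residuals."""
    T = len(y)
    best_t, best_ssr = None, np.inf
    for t in range(5, T - 5):          # trim the sample ends
        ssr = ((y[:t] - y[:t].mean()) ** 2).sum() \
            + ((y[t:] - y[t:].mean()) ** 2).sum()
        if ssr < best_ssr:
            best_t, best_ssr = t, ssr
    return best_t

# series with a mean shift of 3 at t = 100
y = np.concatenate([rng.standard_normal(100), 3.0 + rng.standard_normal(100)])
t_hat = ls_break_date(y)
```

In the article's setting the same grid-search logic applies, with the scalar means replaced by factor models estimated on the pre- and post-break subsamples.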
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, with the intention of offering an up-to-date account of theory and methods in the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers, and other practitioners who are interested in using mixed models for statistical data analysis. The book is suitable for a course in an M.S. program in statistics, provided that the section of further results and technical notes in each of the first four chapters is skipped. If these four sections are included, the book may be used for a course in a Ph.D. program in statistics. A first course in mathematical statistics, the ability to use a computer for data analysis, and familiarity with calculus and linear algebra are prerequisites. Additional statistical courses such as regression analysis and a good knowledge of matrices would be helpful.