We develop and exemplify application of new classes of dynamic models for time series of nonnegative counts. Our novel univariate models combine dynamic generalized linear models for binary and conditionally Poisson time series with dynamic random effects for over-dispersion. These models estimate dynamic regression coefficients in both the binary and nonzero count components. Sequential Bayesian analysis allows fast, parallel analysis of sets of decoupled time series. New multivariate models then enable information sharing in contexts where data at a more highly aggregated level provide more incisive inferences on shared patterns such as trends and seasonality. A novel multiscale approach, one new example of the concept of decouple/recouple in time series, enables information sharing across series. This incorporates cross-series linkages while insulating parallel estimation of univariate models, and hence enables scalability in the number of series. The major motivating context is supermarket sales forecasting. Detailed examples drawn from a case study in multistep forecasting of sales of a number of related items showcase forecasting of multiple series, with discussion of forecast accuracy metrics, comparisons with existing methods, and broader questions of probabilistic forecast assessment.
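The binary/nonzero-count decomposition described above can be sketched as a forward simulation. This is a minimal illustration of the general construction (a Bernoulli component gating a shifted Poisson, each driven by a latent state), not the authors' actual model: the AR(1) latent dynamics, the function name, and all parameter values are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dcmm(T=100, a_pi=0.9, a_mu=0.95):
    """Simulate a binary/shifted-Poisson count mixture: a Bernoulli
    'sale occurs' component gates a shifted Poisson for the sale size.
    AR(1) latent states and all parameters are illustrative only."""
    eta_pi = np.zeros(T)  # latent state driving P(y_t > 0)
    eta_mu = np.zeros(T)  # latent state driving the Poisson rate
    for t in range(1, T):
        eta_pi[t] = a_pi * eta_pi[t - 1] + rng.normal(0, 0.2)
        eta_mu[t] = a_mu * eta_mu[t - 1] + rng.normal(0, 0.1)
    pi = 1.0 / (1.0 + np.exp(-eta_pi))  # logit link (binary DGLM)
    mu = np.exp(eta_mu)                 # log link (count DGLM)
    y = np.zeros(T, dtype=int)
    z = rng.random(T) < pi              # binary component: sale occurs?
    y[z] = 1 + rng.poisson(mu[z])       # shifted Poisson for nonzero counts
    return y

y = simulate_dcmm()
```

The gating structure is what allows zero counts to be modeled separately from the size of nonzero counts, which is the key to handling the sparse, over-dispersed data described in the abstract.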
Carbon emissions reached an all-time high in 2018, when global carbon dioxide emissions from burning fossil fuels increased by about 2.7%, after a 1.6% increase in 2017. Thus, we need to pay special attention to carbon emissions and work out possible solutions if we still want to meet the targets of the Paris climate agreement. This Special Issue collects 16 carbon emissions-related papers (including 5 that are carbon tax-related) and 4 energy-related papers using various methods or models, such as the input–output model, decoupling analysis, life cycle impact analysis (LCIA), the relational analysis model, the generalized Divisia index model (GDIM), forecasting models, the three-indicator allocation model, mathematical programming, the real options model, and multiple linear regression. The studies come from China, Taiwan, Brazil, Thailand, and the United States, and involve industries such as agriculture, transportation, power, tires, textiles, wave energy, natural gas, and petroleum. Although this Special Issue does not fully resolve these concerns, it provides abundant material for implementing energy conservation and carbon emissions reduction. However, many issues arising from global warming still require research.
This paper examines several extensions of the stochastic frontier that account for unmeasured heterogeneity as well as firm inefficiency. The fixed effects model is extended to the stochastic frontier model using results that specifically employ its nonlinear specification. Based on Monte Carlo results, we find that the incidental parameters problem operates on the coefficient estimates in the fixed effects stochastic frontier model in ways that are somewhat at odds with other familiar results. We consider a special case of the random parameters model that produces a random effects model that preserves the central feature of the stochastic frontier model and accommodates heterogeneity. We then examine random parameters and latent class models. In these cases, explicit models for firm heterogeneity are built into the stochastic frontier. Comparisons with received results for these models are presented in an application to the U.S. banking industry.
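For readers unfamiliar with the base model, the canonical stochastic frontier (the Aigner–Lovell–Schmidt form that these extensions build on) can be simulated in a few lines. The coefficients and error scales below are arbitrary illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Canonical production stochastic frontier for a cross-section:
#   y_i = beta0 + beta1 * x_i + v_i - u_i
# with symmetric noise v_i and one-sided inefficiency u_i >= 0
# (half-normal here). The paper's extensions add fixed/random
# effects for firm heterogeneity on top of this structure.
n = 500
x = rng.normal(size=n)
v = rng.normal(0.0, 0.3, size=n)          # idiosyncratic noise
u = np.abs(rng.normal(0.0, 0.5, size=n))  # half-normal inefficiency
y = 1.0 + 0.8 * x + v - u

# The composed error v - u is left-skewed: inefficiency can only
# pull observed output below the frontier.
eps = v - u
```

The one-sided sign of u is what distinguishes inefficiency from noise, and it is this asymmetry that maximum likelihood exploits to separate the two components.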
This article considers the problem of inference in observational studies with time-varying adoption of treatment. In addition to an unconfoundedness assumption that the potential outcomes are independent of the times at which units adopt treatment conditional on the units' observed characteristics, our analysis assumes that the time at which each unit adopts treatment follows a Cox proportional hazards model. This assumption permits the time at which each unit adopts treatment to depend on the observed characteristics of the unit, but imposes the restriction that the probability of multiple units adopting treatment at the same time is zero. In this context, we study randomization tests of a null hypothesis that specifies that there is no treatment effect for all units and all time periods in a distributional sense. We first show that an infeasible test that treats the parameters of the Cox model as known has rejection probability under the null hypothesis no greater than the nominal level in finite samples. Since these parameters are unknown in practice, this result motivates a feasible test that replaces these parameters with consistent estimators. While the resulting test does not need to have the same finite-sample validity as the infeasible test, we show that it has limiting rejection probability under the null hypothesis no greater than the nominal level. In a simulation study, we examine the practical relevance of our theoretical results, including robustness to misspecification of the model for the time at which each unit adopts treatment. Finally, we provide an empirical application of our methodology using the synthetic control-based test statistic and tobacco legislation data found in Abadie, Diamond and Hainmueller.
Supplementary materials for this article are available online.
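A stripped-down version of such a randomization test can be sketched as follows. Instead of re-drawing adoption times from an estimated Cox model as the paper does, this sketch simply permutes the observed adoption times across units, which is exact only in the special case where all units share the same adoption-time distribution; the function names and the toy test statistic are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def randomization_p_value(y, adopt, stat_fn, n_draws=999):
    """Sharp-null randomization test: under 'no treatment effect for
    any unit at any time', outcomes are unchanged when adoption times
    are re-drawn. Permuting observed adoption times across units is a
    simplification of the paper's Cox-model-based draws."""
    obs = stat_fn(y, adopt)
    draws = np.array([stat_fn(y, rng.permutation(adopt))
                      for _ in range(n_draws)])
    # The p-value counts the observed statistic among the draws.
    return (1 + np.sum(draws >= obs)) / (1 + n_draws)

# Toy statistic: average post-adoption minus pre-adoption mean outcome.
def mean_shift(y, adopt):
    return np.mean([yi[a:].mean() - yi[:a].mean()
                    for yi, a in zip(y, adopt)])

y = rng.normal(size=(20, 12))          # 20 units, 12 periods, null is true
adopt = rng.integers(3, 10, size=20)   # adoption period of each unit
p = randomization_p_value(y, adopt, mean_shift)
```

The paper's finite-sample validity result for the infeasible test corresponds to the fact that, when adoption times are genuinely exchangeable under the null, such a p-value is stochastically no smaller than uniform.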
Matlab Software for Spatial Panels
Elhorst, J. Paul
International Regional Science Review, 07/2014, Volume 37, Issue 3
Journal Article
Peer-reviewed
Open access
Elhorst provides Matlab routines to estimate spatial panel data models at his website. This article extends these routines to include the bias correction procedure proposed by Lee and Yu if the spatial panel data model contains spatial and/or time-period fixed effects, the direct and indirect effects estimates of the explanatory variables proposed by LeSage and Pace, and a selection framework to determine which spatial panel data model best describes the data. To demonstrate these routines in an empirical setting, a demand model for cigarettes is estimated based on panel data from forty-six US states over the period 1963–1992.
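As background (our notation, not taken from the article), a representative specification handled by such routines is the spatial autoregressive (SAR) panel model with spatial and time-period fixed effects:

```latex
% SAR panel model with spatial fixed effects \mu and
% time-period fixed effects \alpha_t (standard textbook form):
y_t = \rho W y_t + X_t \beta + \mu + \alpha_t \iota_N + \varepsilon_t,
\qquad \varepsilon_t \sim N(0, \sigma^2 I_N)
```

where W is the N x N spatial weights matrix. The LeSage–Pace effects estimates mentioned above follow from the reduced form: the matrix of partial effects of the k-th regressor is $(I_N - \rho W)^{-1} \beta_k$, whose average diagonal element gives the direct effect and whose average off-diagonal row sum gives the indirect (spillover) effect.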
Abstract
Inferring reciprocal effects or causality between variables is a central aim of behavioral and psychological research. To address reciprocal effects, a variety of longitudinal models that include cross-lagged relations have been proposed in different contexts and disciplines. However, the relations between these cross-lagged models have not been systematically discussed in the literature. This lack of insight makes it difficult for researchers to select an appropriate model when analyzing longitudinal data, and some researchers do not even think about alternative cross-lagged models. The present research provides a unified framework that clarifies the conceptual and mathematical similarities and differences between these models. The unified framework shows that existing longitudinal models can be effectively classified based on whether the model posits unique factors and/or dynamic residuals and what types of common factors are used to model changes. The latter is essential to understand how cross-lagged parameters are interpreted. We also present an example using empirical data to demonstrate that there is great risk of drawing different conclusions depending on the cross-lagged models used.
Translational Abstract
In behavioral and psychological research, inferring longitudinal relations between variables (e.g., how does change in one variable affect change in another?) is often a central concern. Various longitudinal cross-lagged models can address the presence, magnitude, and direction of such a relation through their cross-lagged parameters. However, these models have been proposed in different contexts and disciplines, and the relations between them have not been systematically discussed in the literature. The present research provides a unified framework, characterized by three kinds of equations, to clarify the conceptual and mathematical similarities and differences between these models. This framework is useful for understanding the potential of longitudinal models to assess causal effects. We also present an example using empirical data to demonstrate that there is great risk of drawing different conclusions about longitudinal relations and causality depending on the cross-lagged models that are used.
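For concreteness (our notation, not the article's), one of the models such a framework classifies is the classical bivariate cross-lagged panel model:

```latex
% Classical cross-lagged panel model (CLPM) for variables x and y,
% unit i, occasion t:
x_{it} = \alpha_{x,t} + \beta_x x_{i,t-1} + \gamma_x y_{i,t-1} + e_{x,it}
\\
y_{it} = \alpha_{y,t} + \beta_y y_{i,t-1} + \gamma_y x_{i,t-1} + e_{y,it}
```

Here the cross-lagged parameters $\gamma_x$ and $\gamma_y$ capture how each variable predicts the other at the next occasion. Variants such as the random-intercept CLPM add unit-specific trait factors, which is exactly the kind of distinction (unique factors, dynamic residuals, types of common factors) the framework organizes.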
The Model Confidence Set
Hansen, Peter R.; Lunde, Asger; Nason, James M.
Econometrica, March 2011, Volume 79, Issue 2
Journal Article
Peer-reviewed
This paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models constructed so that it contains the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS acknowledges the limitations of the data: uninformative data yield an MCS with many models, whereas informative data yield an MCS with only a few models. The MCS procedure does not assume that a particular model is the true model; in fact, the MCS procedure can be used to compare more general objects, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999) and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best regression in terms of in-sample likelihood criteria.
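The elimination logic behind an MCS can be illustrated with a simplified sketch. This is not Hansen, Lunde, and Nason's exact procedure (they use studentized statistics with a block bootstrap and specific p-value updating); it is a toy version under an i.i.d.-losses assumption, with all names invented here.

```python
import numpy as np

rng = np.random.default_rng(3)

def model_confidence_set(losses, alpha=0.10, n_boot=500):
    """Toy MCS-style elimination: test equal predictive ability via a
    bootstrap of the max standardized loss differential, and drop the
    worst model until the test no longer rejects."""
    T, m = losses.shape
    keep = list(range(m))
    while len(keep) > 1:
        d = losses[:, keep] - losses[:, keep].mean(axis=1, keepdims=True)
        dbar = d.mean(axis=0)
        se = d.std(axis=0, ddof=1) / np.sqrt(T)
        t_obs = np.max(np.abs(dbar / se))
        # Bootstrap the null distribution of the max statistic by
        # recentering resampled means around the sample means.
        t_boot = np.empty(n_boot)
        for b in range(n_boot):
            db = d[rng.integers(0, T, size=T)]
            se_b = db.std(axis=0, ddof=1) / np.sqrt(T)
            t_boot[b] = np.max(np.abs((db.mean(axis=0) - dbar) / se_b))
        if t_obs < np.quantile(t_boot, 1.0 - alpha):
            break  # equal predictive ability not rejected: stop
        keep.pop(int(np.argmax(dbar / se)))  # eliminate worst model
    return keep

# Toy data: three models, model 2 has clearly higher loss.
losses = rng.normal(1.0, 0.1, size=(200, 3))
losses[:, 2] += 1.0
mcs = model_confidence_set(losses)
```

The behavior described in the abstract is visible here: the clearly inferior model is eliminated, while models with statistically indistinguishable losses survive together in the set.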
• Reviewed 59 studies that applied the hydrogeological multi-model approach.
• Developing mutually exclusive, collectively exhaustive models remains a challenge.
• Conceptual model testing is underutilised but can uncover inconsistent assumptions.
• Iterative model development and testing accommodate conceptual "surprises".
• Model testing is limited by the independence and information content of data.
Hydrogeological conceptual models are collections of hypotheses describing the understanding of groundwater systems and they are considered one of the major sources of uncertainty in groundwater flow and transport modelling. A common method for characterizing the conceptual uncertainty is the multi-model approach, where alternative plausible conceptual models are developed and evaluated. This review aims to give an overview of how multiple alternative models have been developed, tested and used for predictions in the multi-model approach in international literature and to identify the remaining challenges.
The review shows that only a few guidelines for developing the multiple conceptual models exist, and these are rarely followed. The challenge of generating a mutually exclusive and collectively exhaustive range of plausible models is yet to be solved. Regarding conceptual model testing, the reviewed studies show that a challenge remains in finding data that is both suitable to discriminate between conceptual models and relevant to the model objective.
We argue that there is a need for a systematic approach to conceptual model building in which all aspects of conceptualization relevant to the study objective are covered. For each conceptual issue identified, alternative models representing mutually exclusive hypotheses should be defined. Using a systematic, hypothesis-based approach increases the transparency of the modelling workflow and therefore the confidence in the final model predictions, while also anticipating conceptual surprises. While the focus of this review is on hydrogeological applications, the concepts and challenges concerning model building and testing are applicable to spatio-temporal dynamical environmental systems models in general.
An individual driver's driving behavior plays a pivotal role in personalized driver assistance systems. Gaussian mixture models (GMMs) have been widely used to fit driving data but are unsuitable for capturing data with long-tailed distributions. Though the generalized GMM (GGMM) can overcome this fitting issue to some extent, it still cannot handle naturalistic data, which are generally bounded. This paper presents a learning-based personalized driver model that can handle non-Gaussian and bounded naturalistic driving data. To this end, we develop a BGGMM-HMM framework to model driver behavior by integrating a hidden Markov model (HMM) into a bounded GGMM (BGGMM), which includes the GMM and GGMM as special cases. Further, we design an associated iterative learning algorithm to estimate the model parameters. Naturalistic car-following driving data from eight drivers are used to demonstrate the effectiveness of BGGMM-HMM. Experimental results show that the personalized BGGMM-HMM driver model, which leverages the non-Gaussian and bounded support of the driving data, improves model accuracy by 23–30% over traditional GMM-based models.
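For contrast, the traditional GMM baseline the paper improves on can be fit with a plain EM iteration. This sketch (1-D, two components, invented toy "speed" data) is not the paper's BGGMM-HMM, which replaces the Gaussian component with a bounded generalized Gaussian and adds hidden-Markov regime dynamics.

```python
import numpy as np

rng = np.random.default_rng(4)

def em_gmm_1d(x, k=2, n_iter=100):
    """Plain EM for a 1-D Gaussian mixture -- the baseline driver
    model that, per the abstract, breaks down on bounded,
    long-tailed naturalistic driving data."""
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))  # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities under the current parameters.
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Toy data: two clearly separated driving-speed regimes.
x = np.concatenate([rng.normal(5, 1, 400), rng.normal(20, 2, 600)])
w, mu, var = em_gmm_1d(x)
```

The Gaussian component's unbounded, symmetric tails are exactly what the abstract criticizes: real driving signals (speeds, gaps) are nonnegative and often skewed, which motivates the bounded generalized Gaussian replacement.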
Current approaches to Natural Language Processing (NLP) have shown impressive improvements in many important tasks: machine translation, language modeling, text generation, sentiment/emotion analysis, natural language understanding, and question answering, among others. The advent of new methods and techniques, such as graph-based approaches, reinforcement learning, and deep learning, has boosted many NLP tasks to human-level performance (and even beyond). This has attracted the interest of many companies, whose new products and solutions can benefit from advances in this relevant area within the artificial intelligence domain. This Special Issue reprint, focusing on emerging techniques and trendy applications of NLP methods, reports on some of these achievements, establishing a useful reference for industry and researchers on cutting-edge human language technologies.