
Multiple Model Adaptive Estimation (MMAE) is a Bayesian technique that applies a bank of Kalman filters to predict future observations. Each Kalman filter is based on a different set of parameters and hence produces different residuals. The likelihood of each Kalman filter's prediction is determined by the magnitude of its residuals. Since some researchers have obtained good forecasts using a single Kalman filter, we tested MMAE's ability to make time-series predictions. Our Kalman filters have a dynamics model based on a Box-Jenkins Auto-Regressive Moving Average (ARMA) model and a measurement model with additive noise. The time-series prediction is the probability-weighted combination of the Kalman filter predictions, and we also construct a probability interval about that estimate based on the filter probabilities. In a Monte Carlo analysis, we test this MMAE approach and report the results against many different criteria. Our analysis tests the robustness of the approach by examining its ability to make predictions when the Kalman filter dynamics models do not match the time-series model that generated the data. Our analysis indicates benefits in applying multiple model adaptive estimation to time series analysis.
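The mechanics described above — a bank of filters whose posterior weights are updated from residual likelihoods — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes scalar AR(1) dynamics (rather than a general ARMA model) and illustrative noise variances `q` and `r`, purely to show the Bayesian weight update and the probability-weighted prediction.

```python
import numpy as np

def mmae_forecast(y, phis, q=1.0, r=1.0):
    """Sketch of MMAE with a bank of scalar AR(1) Kalman filters.

    Each filter assumes a different AR coefficient phi; the filters are
    weighted by the Gaussian likelihood of their one-step residuals.
    q and r are assumed process/measurement noise variances.
    """
    n_models = len(phis)
    probs = np.full(n_models, 1.0 / n_models)  # uniform prior over models
    x = np.zeros(n_models)                     # per-filter state estimates
    P = np.ones(n_models)                      # per-filter state variances
    for z in y:
        # time update (predict) for every filter in the bank
        x_pred = phis * x
        P_pred = phis**2 * P + q
        # measurement update: residual (innovation) and its variance
        resid = z - x_pred
        S = P_pred + r
        K = P_pred / S
        x = x_pred + K * resid
        P = (1.0 - K) * P_pred
        # Bayesian weight update from the residual likelihoods
        lik = np.exp(-0.5 * resid**2 / S) / np.sqrt(2.0 * np.pi * S)
        probs = probs * lik
        probs /= probs.sum()
    # probability-weighted one-step-ahead prediction
    return float(np.sum(probs * phis * x)), probs

# illustrative use: simulate AR(1) data with phi = 0.8 plus measurement noise
rng = np.random.default_rng(0)
true_phi, x_true, y = 0.8, 0.0, []
for _ in range(500):
    x_true = true_phi * x_true + rng.normal()
    y.append(x_true + 0.3 * rng.normal())
pred, probs = mmae_forecast(np.array(y), phis=np.array([0.2, 0.5, 0.8]))
print(probs)  # posterior mass should concentrate on the phi = 0.8 filter
```

Because each filter's residuals are smallest when its assumed dynamics match the data, the posterior `probs` concentrates on the best-matching model, and the weighted prediction degrades gracefully when no single model is exactly right — the robustness property the abstract tests.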
This article develops statistical methodology for semiparametric models for multiple time series of possibly high dimension N. The objective is to obtain precise estimates of unknown parameters (which characterize autocorrelations and cross-autocorrelations) without fully parameterizing other distributional features, while imposing a degree of parsimony to mitigate a curse of dimensionality. The innovations vector is modelled as a linear transformation of independent but possibly non-identically distributed random variables, whose distributions are nonparametric. In such circumstances, Gaussian pseudo-maximum likelihood estimates of the parameters are typically √n-consistent, where n denotes series length, but asymptotically inefficient unless the innovations are in fact Gaussian. Our parameter estimates, which we call 'adaptive,' are asymptotically as first-order efficient as maximum likelihood estimates based on correctly specified parametric innovations distributions. The adaptive estimates use nonparametric estimates of score functions (of the elements of the underlying vector of independent random variables) that involve truncated expansions in terms of basis functions; these have advantages over the kernel-based score function estimates used in most of the adaptive estimation literature. Our parameter estimates are also √n-consistent and asymptotically normal. A Monte Carlo study of finite sample performance of the adaptive estimates, employing a variety of parameterizations, distributions and choices of N, is reported.
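The key ingredient above — estimating a score function by a truncated basis expansion rather than a kernel — can be sketched in a few lines. The helper below is hypothetical, not the paper's method: it assumes a simple polynomial basis and fits the expansion coefficients via the integration-by-parts identity E[ψ(X)φ(X)] = E[φ′(X)], which holds for the score ψ = −f′/f when boundary terms vanish.

```python
import numpy as np

def score_estimate(x, degree=3):
    """Estimate the score psi(x) = -f'(x)/f(x) of an unknown density f
    by a truncated polynomial-basis expansion (illustrative sketch).

    Coefficients solve the least-squares projection of psi onto the basis,
    using E[psi * phi] = E[phi'] so that no density estimate is needed.
    """
    powers = np.arange(1, degree + 1)
    Phi = x[:, None] ** powers                  # n x K basis matrix phi_k(x) = x^k
    dPhi = powers * x[:, None] ** (powers - 1)  # n x K derivative matrix
    G = Phi.T @ Phi / len(x)                    # empirical E[phi phi^T]
    b = dPhi.mean(axis=0)                       # empirical E[phi']
    theta = np.linalg.solve(G, b)               # projection coefficients
    return lambda t: (t[:, None] ** powers) @ theta

# illustrative check: for standard Gaussian data the true score is psi(x) = x
rng = np.random.default_rng(1)
x = rng.normal(size=20000)
psi = score_estimate(x)
print(psi(np.array([-1.0, 0.0, 1.0])))  # should be roughly [-1, 0, 1]
```

A practical attraction of such expansions over kernel-based score estimates, as the abstract notes, is that the fit reduces to a small linear system in the coefficients, with the truncation level playing the role of a smoothing parameter.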
This is the new and totally revised edition of Lütkepohl’s classic 1991 work. It provides a detailed introduction to the main steps of analyzing multiple time series: model specification, estimation, model checking, and the use of the models for economic analysis and forecasting. The book now includes new chapters on cointegration analysis, structural vector autoregressions, cointegrated VARMA processes, and multivariate ARCH models. It bridges the gap to the difficult technical literature on the topic and is accessible to graduate students in business and economics. In addition, multiple time series courses in other fields, such as statistics and engineering, may be based on it.
This paper was motivated by a Decision Sciences article (v. 10, no. 2, 232-244, April 1979) that presented comparisons of the adaptive estimation procedure (AEP), adaptive filtering, the Box-Jenkins (BJ) methodology, and multiple regression analysis as they apply to time-series forecasting with single-series models. While such comparisons are to be applauded in general, it is demonstrated that the empirical comparisons in that paper are quite misleading with respect to choosing between the AEP and BJ approaches. This demonstration is followed by a somewhat philosophical discussion of comparison-of-methods techniques.
Analysis of Economic Time Series: A Synthesis integrates several topics in economic time-series analysis, including the formulation and estimation of distributed-lag models of dynamic economic behavior; the application of spectral analysis in the study of the behavior of economic time series; and unobserved-components models for economic time series and the closely related problem of seasonal adjustment. Comprising 14 chapters, this volume begins with a historical background on the use of unobserved components in the analysis of economic time series, followed by an introduction to the theory of stationary time series. Subsequent chapters focus on the spectral representation and its estimation; the formulation of distributed-lag models; elements of the theory of prediction and extraction; and the formulation of unobserved-components models and canonical forms. Seasonal adjustment techniques and multivariate mixed moving-average autoregressive time-series models are also considered. Finally, a time-series model of the U.S. cattle industry is presented. This monograph will be of value to mathematicians, economists, and those interested in economic theory, econometrics, and mathematical economics.
Stationary stochastic processes and their properties in the time domain; the frequency domain; state-space models and the Kalman filter; estimation of autoregressive moving-average models; model building and prediction; selected topics in time-series regression.