Forecasting High Frequency Volatility Shocks

This thesis presents a new strategy that unites qualitative and quantitative mass data in the form of text news and tick-by-tick asset prices to forecast the risk of upcoming volatility shocks. Holger Kömm embeds the proposed strategy in a monitoring system that uses, first, a sequence of competing estimators to compute the unobservable volatility; second, a new two-state Markov switching mixture model for autoregressive and zero-inflated time series to identify structural breaks in a latent data-generating process; and third, a selection of competing pattern recognition algorithms to classify the potential information embedded in unexpected but publicly observable text data into shock and non-shock information. The monitor is trained, tested, and evaluated on a two-year survey of the prime-standard assets listed in the indices DAX, MDAX, SDAX and TecDAX.
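The two-state switching idea can be illustrated with a toy simulation: a hidden Markov chain alternates between a calm regime, where observations are zero-inflated, and a shock regime with larger innovations feeding an AR(1) observation. This is an illustrative sketch with assumed parameters and hypothetical names, not Kömm's estimated model.

```python
import random

def simulate_markov_switching(n, p_stay=0.95, q_stay=0.8, seed=1):
    """Simulate a two-state Markov chain (0 = calm, 1 = shock) driving a
    zero-inflated AR(1) observation series.  All parameter values are
    illustrative assumptions."""
    rng = random.Random(seed)
    states, y = [], []
    s, prev = 0, 0.0
    for _ in range(n):
        stay = p_stay if s == 0 else q_stay
        if rng.random() > stay:          # leave the current regime
            s = 1 - s
        if s == 0 and rng.random() < 0.6:
            x = 0.0                      # zero inflation in the calm state
        else:
            sd = 1.0 if s == 0 else 3.0  # shock regime has larger innovations
            x = 0.7 * prev + rng.gauss(0, sd)
        states.append(s)
        y.append(x)
        prev = x
    return states, y
```

A fitted version of such a model would estimate the transition and zero-inflation probabilities from data; the simulation only shows the data-generating structure.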
This paper estimates models of high-frequency index futures returns using 'around the clock' 5-minute returns that incorporate the following key features: multiple persistent stochastic volatility factors, jumps in prices and volatilities, seasonal components capturing time-of-day patterns, correlations between return and volatility shocks, and announcement effects. We develop an integrated MCMC approach to estimate interday and intraday parameters and states using high-frequency data without resorting to aggregation measures such as realized volatility. We provide a case study using financial crisis data from 2007 to 2009, and use particle filters to construct likelihood functions for model comparison and out-of-sample forecasting from 2009 to 2012. We show that our approach improves realized volatility forecasts by up to 50% over existing benchmarks.
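Several of these ingredients can be sketched in a toy discrete-time simulator: a persistent stochastic log-variance, a U-shaped intraday seasonal factor, occasional price jumps, and negatively correlated return and volatility shocks (leverage). The parameters and names below are illustrative assumptions, not the paper's MCMC-estimated specification.

```python
import math
import random

def simulate_sv_jumps(n_days=5, bars_per_day=78, seed=7):
    """Toy stochastic-volatility simulator for 5-minute returns with a
    diurnal seasonal, jumps in prices, and leverage.  Illustrative only."""
    rng = random.Random(seed)
    mu, phi, sigma_v, rho = -9.0, 0.98, 0.1, -0.5
    h = mu                                   # log-variance state
    returns = []
    for _ in range(n_days):
        for t in range(bars_per_day):
            u = t / (bars_per_day - 1)
            season = 1.0 + 0.5 * (2.0 * u - 1.0) ** 2  # high at open/close
            z = rng.gauss(0, 1)                        # return shock
            eps = rho * z + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
            jump = rng.gauss(0, 0.01) if rng.random() < 0.01 else 0.0
            returns.append(math.exp(h / 2) * math.sqrt(season) * z + jump)
            h = mu + phi * (h - mu) + sigma_v * eps    # log-variance AR(1)
    return returns
```

The paper's contribution is estimating such parameters and latent states jointly by MCMC; the sketch only shows the model structure that the estimation targets.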
Calvet and Fisher present a powerful new technique for volatility forecasting that draws on insights from the use of multifractals in the natural sciences and mathematics, providing a unified treatment of multifractal techniques in finance. A large existing literature (e.g., Engle, 1982; Rossi, 1995) models volatility as an average of past shocks, possibly with a noise component. This approach often has difficulty capturing sharp discontinuities and large changes in financial volatility. Their research has shown the advantages of modelling volatility as subject to abrupt regime changes of heterogeneous durations. Using the intuition that some economic phenomena are long-lasting while others are more transient, they permit regimes to have varying degrees of persistence, and they show how to construct high-dimensional regime-switching models that are easy to estimate and substantially outperform some of the best traditional forecasting models, such as GARCH. The goal of Multifractal Volatility is to popularize the approach by presenting these exciting new developments to a wider audience. The authors emphasize both theoretical and empirical applications, beginning in a style that is easily accessible and intuitive in the early chapters and extending to the most rigorous continuous-time and equilibrium pricing formulations in the final chapters. The first comprehensive book on multifractal techniques in finance, it leads the reader intuitively from existing volatility techniques to the frontier of research in this field.
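The regime-switching construction behind this approach can be sketched in a few lines: a binomial Markov-switching multifractal (MSM) drives volatility as the product of several multipliers, each renewed at its own frequency, so short- and long-lived regimes coexist. This is a minimal sketch with assumed parameter values; the switching-probability scheme is a simplification of the calibrated specification in the book.

```python
import math
import random

def simulate_msm(n, kbar=5, m0=1.4, gamma1=0.1, b=3.0, sigma=1.0, seed=3):
    """Minimal binomial MSM sketch: kbar volatility multipliers, each a
    two-state chain switching at its own frequency, combine
    multiplicatively into instantaneous volatility."""
    rng = random.Random(seed)
    # switching probabilities grow geometrically across frequencies
    gammas = [1 - (1 - gamma1) ** (b ** k) for k in range(kbar)]
    M = [m0 if rng.random() < 0.5 else 2 - m0 for _ in range(kbar)]
    vols, rets = [], []
    for _ in range(n):
        for k in range(kbar):
            if rng.random() < gammas[k]:           # renew multiplier k
                M[k] = m0 if rng.random() < 0.5 else 2 - m0
        vol = sigma * math.sqrt(math.prod(M))
        vols.append(vol)
        rets.append(vol * rng.gauss(0, 1))
    return vols, rets
```

Because each multiplier takes values m0 or 2 - m0 with equal probability, its mean is one, so the multipliers scale volatility without shifting its long-run level.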
We study financial volatility during the Global Financial Crisis and use the largest volatility shocks to identify major events during the crisis. Our analysis makes extensive use of high-frequency financial data to model volatility and to determine the timing within the day when the largest volatility shocks occurred. The latter helps us identify the events that may be associated with each of these shocks, and serves to illustrate the benefits of using high-frequency data. Some of the largest volatility shocks coincide, not surprisingly, with the bankruptcy of Lehman Brothers on September 15, 2008 and Congress's failure to pass the Emergency Economic Stabilization Act on September 29, 2008. Yet, the largest volatility shock was on February 27, 2007, the date when Freddie Mac announced a stricter policy for underwriting subprime loans and a date that was marked by a crash on the Chinese stock market. However, the intraday high-frequency data shows that the main culprit was a computer glitch in the trading system. The days with the largest drops in volatility can in most cases be related to interventions by governments and central banks.
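The mechanics behind such an exercise are simple to sketch: compute daily realized volatility from intraday returns, then rank days by the change in realized volatility from the previous day. The functions below are hypothetical illustrations of this pipeline, not the authors' estimator.

```python
import math

def realized_vol(returns_by_day):
    """Daily realized volatility: square root of the sum of squared
    intraday (e.g. 5-minute) returns for each day."""
    return {day: math.sqrt(sum(r * r for r in rets))
            for day, rets in returns_by_day.items()}

def largest_shocks(rv, k=2):
    """Rank days by the jump in realized volatility relative to the
    previous day; a toy stand-in for formal shock identification."""
    days = sorted(rv)
    jumps = {d1: rv[d1] - rv[d0] for d0, d1 in zip(days, days[1:])}
    return sorted(jumps, key=jumps.get, reverse=True)[:k]
```

Fed with 5-minute returns keyed by date, `largest_shocks` would return candidate shock days, which could then be matched against intraday timestamps and news as the paper does.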
Using realized volatility to estimate the conditional variance of financial returns, we compare volatility forecasts from linear GARCH models with those from asymmetric ones, considering horizons extending to 30 days. Forecasts are compared using three different evaluation tests. With data from an equity index and two foreign exchange return series, we show that asymmetric models provide statistically significant forecast improvements over the GARCH model for two of the datasets, and improve forecasts for all datasets by means of forecast combinations. These results extend to about 10 days into the future, beyond which the forecasts are statistically inseparable from each other.
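The symmetric/asymmetric distinction amounts to one extra term in the variance recursion. The sketch below assumes a GJR-type asymmetry, one common asymmetric specification (the paper's exact models may differ), with illustrative parameter values:

```python
def garch_variance(returns, omega=1e-6, alpha=0.05, beta=0.88, gamma=0.0):
    """Conditional-variance recursion.  gamma=0 gives plain GARCH(1,1);
    gamma>0 gives a GJR-type asymmetry, where negative returns raise
    next-period variance more than positive ones do."""
    h = omega / (1 - alpha - beta - gamma / 2)   # start at unconditional level
    path = []
    for r in returns:
        path.append(h)
        h = omega + (alpha + gamma * (r < 0)) * r * r + beta * h
    return path

def garch_forecast(h_t, horizon, omega=1e-6, alpha=0.05, beta=0.88, gamma=0.0):
    """Multi-step variance forecasts: mean reversion toward the
    unconditional variance at rate alpha + beta + gamma/2 per step."""
    persist = alpha + beta + gamma / 2
    uncond = omega / (1 - persist)
    return [uncond + persist ** k * (h_t - uncond)
            for k in range(1, horizon + 1)]
```

The geometric decay in `garch_forecast` is why forecast differences between models shrink with the horizon, consistent with the finding that improvements fade after about 10 days.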
The global financial crisis has reopened discussion surrounding the use of appropriate theoretical financial frameworks to reflect the current economic climate. There is a need for more sophisticated analytical concepts which take into account current quantitative changes and unprecedented turbulence in the financial markets. This book provides a comprehensive guide to the quantitative analysis of high frequency financial data in the light of current events and contemporary issues, using the latest empirical research and theory. It highlights and explains the shortcomings of theoretical frameworks and provides an explanation of high-frequency theory, emphasising ways in which to critically apply this knowledge within a financial context. Modelling and Forecasting High Frequency Financial Data combines traditional and updated theories and applies them to real-world financial market situations. It will be a valuable and accessible resource for anyone wishing to understand quantitative analysis and modelling in current financial markets.
Greater data availability has been coupled with developments in statistical theory and economic theory to allow more elaborate and complicated models to be entertained. These include factor models, DSGE models, restricted vector autoregressions, and non-linear models.
While it is clear that the volatility of asset returns is serially correlated, there is no general agreement as to the most appropriate parametric model for characterizing this temporal dependence. In this paper, we propose a simple way of modeling financial market volatility using high frequency data. The method avoids using a tight parametric model, by instead simply fitting a long autoregression to log-squared, squared or absolute high frequency returns. This can either be estimated by the usual time domain method, or alternatively the autoregressive coefficients can be backed out from the smoothed periodogram estimate of the spectrum of log-squared, squared or absolute returns. We show how this approach can be used to construct volatility forecasts, which compare favorably with some leading alternatives in an out-of-sample forecasting exercise.
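A minimal version of the "long autoregression" idea can be written in a few lines: fit an AR(p) to a volatility proxy (e.g. log-squared returns) by Yule-Walker using the Levinson-Durbin recursion, then forecast one step ahead. This is illustrative code, not the authors' implementation; the lag length and choice of proxy are exactly the modeling choices the paper compares.

```python
import math

def autocov(x, lag):
    """Sample autocovariance at a given lag (biased, divisor n)."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n)) / n

def levinson_durbin(x, p):
    """Fit an AR(p) by Yule-Walker via the Levinson-Durbin recursion and
    return the AR coefficients.  Applied to log-squared, squared, or
    absolute returns, this is the 'long autoregression' in miniature."""
    r = [autocov(x, k) for k in range(p + 1)]
    a, e = [], r[0]
    for k in range(1, p + 1):
        acc = r[k] - sum(a[j] * r[k - 1 - j] for j in range(len(a)))
        kappa = acc / e                       # reflection coefficient
        a = [a[j] - kappa * a[k - 2 - j] for j in range(len(a))] + [kappa]
        e *= (1 - kappa * kappa)              # prediction-error update
    return a

def ar_forecast(x, coeffs):
    """One-step-ahead forecast from the fitted autoregression."""
    m = sum(x) / len(x)
    return m + sum(c * (x[-1 - j] - m) for j, c in enumerate(coeffs))
```

In practice one would transform returns first, e.g. `y = [math.log(r * r + 1e-10) for r in returns]`, fit a long lag order such as p = 20, and exponentiate the forecast back to a variance scale.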
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly.
Volatility forecasting models typically rely on either daily or high-frequency (HF) data, and the choice between these two categories is not obvious. In particular, HF data allow volatility to be treated as observable, but they suffer from many limitations: they feature microstructure problems, such as the discreteness of the data, the properties of the trading mechanism, and the existence of the bid-ask spread. Moreover, these data are not always available and, even when they are, the asset's liquidity may not be sufficient to allow for frequent transactions. This paper considers different variants of these two families of volatility-forecasting models, comparing their performance in terms of Value at Risk (VaR) under the assumptions of jumps in prices and leverage effects for volatility. Findings suggest that daily-data models are preferred to HF-data models at the 5% and 1% VaR levels. Specifically, independently of the data frequency, allowing for jumps in prices (or providing fat tails) and leverage effects translates into more accurate VaR measures.
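The VaR comparison rests on two ingredients that are easy to sketch: a distributional quantile applied to a volatility forecast, and a violation-rate check of the resulting VaR series. The quantile constants below are standard N(0,1) and unit-variance Student-t(5) values; the t(5) choice and function names are illustrative assumptions, not the paper's specification.

```python
def var_from_vol(sigma, alpha=0.01, fat_tails=False):
    """One-day Value-at-Risk (a negative return threshold) from a
    volatility forecast.  fat_tails=True swaps the Gaussian quantile
    for a unit-variance Student-t(5) quantile, mimicking fat-tailed
    specifications."""
    gauss = {0.01: -2.326, 0.05: -1.645}
    # t(5) quantiles scaled by sqrt((nu-2)/nu) for unit variance
    t5 = {0.01: -3.365 * (3 / 5) ** 0.5, 0.05: -2.015 * (3 / 5) ** 0.5}
    q = (t5 if fat_tails else gauss)[alpha]
    return q * sigma

def violation_rate(returns, var_series):
    """Share of days on which the realized return breaches its VaR
    forecast; a rate close to alpha indicates an accurate model."""
    hits = sum(r < v for r, v in zip(returns, var_series))
    return hits / len(returns)
```

Comparing `violation_rate` against the nominal level for each candidate model is the basic logic behind VaR-based model selection of the kind the paper performs.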