Download Large Sample Inference For Long Memory Processes free in PDF and EPUB format. You can also read Large Sample Inference For Long Memory Processes online and write the review.

Box and Jenkins (1970) popularized the idea of obtaining a stationary time series by differencing the given, possibly nonstationary, time series. Numerous time series in economics are found to have this property. Subsequently, Granger and Joyeux (1980) and Hosking (1981) found examples of time series whose fractional difference becomes a short memory process, in particular a white noise, while the initial series has unbounded spectral density at the origin, i.e. exhibits long memory.

Further examples of data exhibiting long memory were found in hydrology and in network traffic data, while in finance the phenomenon of strong dependence was established by the dramatic empirical success of long memory processes in modeling the volatility of asset prices and power transforms of stock market returns.

At present there is a need for a text from which an interested reader can methodically learn some basic asymptotic theory and techniques found useful in the analysis of statistical inference procedures for long memory processes. This text makes an attempt in this direction. The authors provide, in a concise style, a graduate-level text summarizing theoretical developments for both short and long memory processes and their applications to statistics. The book also contains some real data applications and mentions some unsolved inference problems for interested researchers in the field.
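The fractional differencing introduced by Granger and Joyeux (1980) and Hosking (1981) can be illustrated numerically. The sketch below (plain Python; the function names and parameter values are my own choices for illustration) computes the coefficients of the binomial expansion (1 - B)^d = sum_k pi_k B^k via the standard recursion and applies the truncated operator to a finite series. For 0 < d < 1/2 the coefficients decay only hyperbolically, like k^(-1-d), which is the source of the long memory removed by fractional differencing.

```python
def fracdiff_coeffs(d, n):
    """Coefficients pi_0, ..., pi_n in (1 - B)^d = sum_k pi_k B^k,
    computed by the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    pi = [1.0]
    for k in range(1, n + 1):
        pi.append(pi[-1] * (k - 1 - d) / k)
    return pi

def fracdiff(x, d):
    """Apply the truncated fractional difference (1 - B)^d to a finite series x."""
    pi = fracdiff_coeffs(d, len(x) - 1)
    return [sum(pi[k] * x[t - k] for k in range(t + 1)) for t in range(len(x))]

if __name__ == "__main__":
    # Illustrative value d = 0.4 (long memory range 0 < d < 1/2)
    print([round(c, 4) for c in fracdiff_coeffs(0.4, 5)])
    # pi_1 = -d, pi_2 = -d(1 - d)/2, and |pi_k| decays like k^(-1-d)
```

Note that d = 0 leaves the series unchanged and d = 1 recovers ordinary first differencing, so the fractional operator interpolates between the two.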
A discrete-time stationary stochastic process with finite variance is said to have long memory if its autocorrelations tend to zero hyperbolically in the lag, i.e. like a power of the lag, as the lag tends to infinity. The absolute sum of the autocorrelations of such a process diverges, and its spectral density is unbounded at the origin. This is unlike the so-called weakly dependent processes, where autocorrelations tend to zero exponentially fast and the spectral density is bounded at the origin. In a long memory process, the dependence between the current observation and one in the distant future is persistent, whereas in weakly dependent processes these observations are approximately independent. This fact alone is enough to caution against the validity of classical inference procedures based on square-root-of-sample-size standardization when the data are generated by a long memory process.

The aim of this volume is to provide a graduate-level text from which one can learn, in a concise fashion, some basic theory and techniques for proving limit theorems for numerous statistics based on long memory processes. It also provides researchers with a guide to some of the inference problems under long memory.
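The contrast between hyperbolic and exponential decay can be made concrete with a small sketch (plain Python; the choice of fractional Gaussian noise as the long memory example and the parameter values H = 0.9, phi = 0.5 are mine, not the book's). It uses the exact autocorrelation of fractional Gaussian noise with Hurst index H, rho(k) = ((k+1)^{2H} - 2k^{2H} + (k-1)^{2H})/2, which behaves like H(2H-1) k^{2H-2} for large k, against the AR(1) autocorrelation phi^k:

```python
def fgn_acf(k, H):
    """Exact lag-k autocorrelation of fractional Gaussian noise with Hurst index H."""
    if k == 0:
        return 1.0
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + (k - 1) ** (2 * H))

def ar1_acf(k, phi):
    """Lag-k autocorrelation of a stationary AR(1) with coefficient phi."""
    return phi ** k

if __name__ == "__main__":
    H, phi = 0.9, 0.5   # illustrative values: H > 1/2 gives long memory
    for k in (1, 10, 100):
        print(k, round(fgn_acf(k, H), 4), ar1_acf(k, phi))
    # The fGn correlations decay like H*(2H-1)*k**(2*H - 2), a power of the
    # lag, so they are not absolutely summable; the AR(1) ones vanish
    # exponentially fast.
```

At lag 100 the long memory correlation is still of order 0.3 while the AR(1) correlation is numerically zero, which is exactly the persistence the paragraph above describes.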
Long-memory processes are known to play an important part in many areas of science and technology, including physics, geophysics, hydrology, telecommunications, economics, finance, climatology, and network engineering. In the last 20 years enormous progress has been made in understanding the probabilistic foundations and statistical principles of such processes. This book provides a timely and comprehensive review, including a thorough discussion of mathematical and probabilistic foundations and statistical methods, emphasizing their practical motivation and mathematical justification. Proofs of the main theorems are provided and data examples illustrate practical aspects. This book will be a valuable resource for researchers and graduate students in statistics, mathematics, econometrics and other quantitative areas, as well as for practitioners and applied researchers who need to analyze data in which long memory, power laws, self-similar scaling or fractal properties are relevant.
This book is devoted to the analysis of causal inference, one of the most difficult tasks in data analysis: when two phenomena are observed to be related, it is often difficult to decide whether one of them causally influences the other, or whether the two have a common cause. This analysis is the main focus of this volume. A good understanding of causal inference requires models of economic phenomena that are as accurate as possible. Because of this need, the volume also contains papers that use non-traditional economic models, such as fuzzy models and models obtained using neural networks and data mining techniques. It also contains papers that apply different econometric models to analyze real-life economic dependencies.
Provides a simple exposition of the basic time series material, and insights into underlying technical aspects and methods of proof. Long memory time series are characterized by a strong dependence between distant events. This book introduces readers to the theory and foundations of univariate time series analysis with a focus on long memory and fractional integration, which are embedded into the general framework. It presents the general theory of time series, including some issues that are not treated in other books on time series, such as ergodicity, persistence versus memory, asymptotic properties of the periodogram, and Whittle estimation. Further chapters address the general functional central limit theory, parametric and semiparametric estimation of the long memory parameter, and locally optimal tests. Intuitive and easy to read, Time Series Analysis with Long Memory in View offers chapters that cover: Stationary Processes; Moving Averages and Linear Processes; Frequency Domain Analysis; Differencing and Integration; Fractionally Integrated Processes; Sample Means; Parametric Estimators; Semiparametric Estimators; and Testing. It also discusses further topics. This book: offers beginning-of-chapter examples as well as end-of-chapter technical arguments and proofs; contains many new results on long memory processes which have not appeared in existing textbooks; takes a basic mathematics (calculus) approach to the topic of time series analysis with long memory; and contains 25 illustrative figures as well as lists of notations and acronyms. Time Series Analysis with Long Memory in View is an ideal text for first-year PhD students, researchers, and practitioners in statistics, econometrics, and any application area that uses time series over a long period. It would also benefit researchers, undergraduates, and practitioners in those areas who require a rigorous introduction to time series analysis.
This modern and comprehensive guide to long-range dependence and self-similarity starts with rigorous coverage of the basics, then moves on to cover more specialized, up-to-date topics central to current research. These topics include, but are not limited to: physical models that give rise to long-range dependence and self-similarity; central and non-central limit theorems for long-range dependent series, and the limiting Hermite processes; fractional Brownian motion and its stochastic calculus; several celebrated decompositions of fractional Brownian motion; multidimensional models for long-range dependence and self-similarity; and maximum likelihood estimation methods for long-range dependent time series. Designed for graduate students and researchers, each chapter of the book is supplemented by numerous exercises, some designed to test the reader's understanding and others inviting the reader to consider some of the open research problems in the field today.
This monograph is a gateway for researchers and graduate students to explore the profound, yet subtle, world of long-range dependence (also known as long memory). The text is organized around the probabilistic properties of stationary processes that are important for determining the presence or absence of long memory. The first few chapters serve as an overview of the general theory of stochastic processes which gives the reader sufficient background, language, and models for the subsequent discussion of long memory. The later chapters devoted to long memory begin with an introduction to the subject along with a brief history of its development, followed by a presentation of what is currently the best known approach, applicable to stationary processes with a finite second moment. The book concludes with a chapter devoted to the author’s own, less standard, point of view of long memory as a phase transition, and even includes some novel results. Most of the material in the book has not previously been published in a single self-contained volume, and can be used for a one- or two-semester graduate topics course. It is complete with helpful exercises and an appendix which describes a number of notions and results belonging to the topics used frequently throughout the book, such as topological groups and an overview of the Karamata theorems on regularly varying functions.
This book compiles theoretical developments on statistical inference for time series and related models in honor of Masanobu Taniguchi's 70th birthday. It covers models such as long-range dependence models, nonlinear conditionally heteroscedastic time series, locally stationary processes, integer-valued time series, Lévy Processes, complex-valued time series, categorical time series, exclusive topic models, and copula models. Many cutting-edge methods such as empirical likelihood methods, quantile regression, portmanteau tests, rank-based inference, change-point detection, testing for the goodness-of-fit, higher-order asymptotic expansion, minimum contrast estimation, optimal transportation, and topological methods are proposed, considered, or applied to complex data based on the statistical inference for stochastic processes. The performances of these methods are illustrated by a variety of data analyses. This collection of original papers provides the reader with comprehensive and state-of-the-art theoretical works on time series and related models. It contains deep and profound treatments of the asymptotic theory of statistical inference. In addition, many specialized methodologies based on the asymptotic theory are presented in a simple way for a wide variety of statistical models. This Festschrift finds its core audiences in statistics, signal processing, and econometrics.