Statistics for Long-Memory Processes

  1. 315 pages
  2. English
  3. ePUB (mobile friendly)
About This Book

Statistics for Long-Memory Processes covers the diverse statistical methods and applications for data with long-range dependence. Presenting material that previously appeared only in journals, the author provides a concise and effective overview of probabilistic foundations, statistical methods, and applications. The material emphasizes basic principles and practical applications and provides an integrated perspective of both theory and practice. This book explores data sets from a wide range of disciplines, such as hydrology, climatology, telecommunications engineering, and high-precision physical measurement. The data sets are conveniently compiled in the index, and this allows readers to view statistical approaches in a practical context. Statistics for Long-Memory Processes also supplies S-PLUS programs for the major methods discussed. This feature allows the practitioner to apply long-memory processes in daily data analysis. For newcomers to the area, the first three chapters provide the basic knowledge necessary for understanding the remainder of the material. To promote selective reading, the author presents the chapters independently. Combining essential methodologies with real-life applications, this outstanding volume is an indispensable reference for statisticians and scientists who analyze data with long-range dependence.

Statistics for Long-Memory Processes by Jan Beran is available in PDF and ePUB format, in the category Mathematics & Probability & Statistics.

Information

Publisher: Routledge
Year: 2017
ISBN: 9781351414104
Edition: 1

CHAPTER 1

Introduction

1.1 An elementary result in statistics

One of the main results taught in an introductory course in statistics is: The variance of the sample mean is equal to the variance of one observation divided by the sample size. In other words, if $X_1, \dots, X_n$ are observations with common mean $\mu = E(X_i)$ and variance $\sigma^2 = \mathrm{var}(X_i) = E[(X_i - \mu)^2]$, then the variance of $\bar{X} = n^{-1} \sum_{i=1}^{n} X_i$ is equal to

$$\mathrm{var}(\bar{X}) = \sigma^2 n^{-1}. \tag{1.1}$$

A second elementary result one learns is: The population mean is estimated by $\bar{X}$, and for large enough samples the $(1-\alpha)$-confidence interval for $\mu$ is given by

$$\bar{X} \pm z_{\alpha/2}\, \sigma\, n^{-1/2} \tag{1.2}$$

if $\sigma^2$ is known and

$$\bar{X} \pm z_{\alpha/2}\, s\, n^{-1/2} \tag{1.3}$$

if $\sigma^2$ has to be estimated. Here $s^2 = (n-1)^{-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$ is the sample variance and $z_{\alpha/2}$ is the $(1 - \alpha/2)$-quantile of the standard normal distribution.
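The book's own programs are written in S-PLUS; purely as an independent illustration of (1.1) and (1.3), the following Python sketch (sample sizes, seed, and variable names are my own choices, not from the book) simulates i.i.d. samples and compares the empirical variance of the sample mean, and the empirical coverage of the interval (1.3), with their nominal values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, alpha = 100, 5000, 0.05
mu, sigma = 5.0, 2.0
z = 1.96  # z_{alpha/2} for alpha = 0.05

means = np.empty(reps)
covered = 0
for r in range(reps):
    x = rng.normal(mu, sigma, size=n)  # i.i.d. sample: assumptions 1-3 hold
    xbar = x.mean()
    s = x.std(ddof=1)                  # sample standard deviation, as in (1.3)
    means[r] = xbar
    if abs(xbar - mu) <= z * s / np.sqrt(n):
        covered += 1

print(means.var())     # empirical variance of the sample mean
print(sigma**2 / n)    # value predicted by (1.1)
print(covered / reps)  # empirical coverage of (1.3), near 1 - alpha
```

Under independence, both comparisons come out as the theory promises; the point of the chapter is that this agreement can fail badly once the observations are correlated.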
Frequently, the assumptions that lead to (1.1), (1.2), and (1.3) are mentioned only briefly. The formulas are very simple and can even be calculated by hand. It is therefore tempting to use them in an automatic way, without checking the assumptions under which they were derived. How reliable are these formulas really in practical applications? In particular, is (1.1) always exact or at least a good approximation to the actual variance of X¯? Is the probability that (1.2) and (1.3) respectively contain the true value μ always equal to or at least approximately equal to 1 – α?
In order to answer these questions one needs to analyze some typical data sets carefully with these questions in mind. Before doing that (in Section 1.4), it is useful to think about the conditions that lead to (1.1), (1.2), and (1.3), and about why these rules might or might not be good approximations.
Suppose that X1, X2,..., Xn are observations sampled randomly from the same population at time points i = 1, 2, ..., n. Thus, X1,..., Xn are random variables with the same (marginal) distribution F. The index i does not necessarily denote time. More generally, i can denote any other natural ordering, such as, for example, the position on a line in the plane.
Consider first equation (1.1). A simple set of conditions under which (1.1) is true can be given as follows:
  1. The population mean $\mu = E(X_i)$ exists and is finite.
  2. The population variance $\sigma^2 = \mathrm{var}(X_i)$ exists and is finite.
  3. $X_1, \dots, X_n$ are uncorrelated, i.e.,
$$\rho(i,j) = 0 \quad \text{for } i \neq j,$$
where
$$\rho(i,j) = \frac{\gamma(i,j)}{\sigma^2}$$
is the autocorrelation between $X_i$ and $X_j$, and
$$\gamma(i,j) = E[(X_i - \mu)(X_j - \mu)]$$
is the autocovariance between $X_i$ and $X_j$.
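For a stationary series, the autocovariance depends only on the lag k = |i − j|, and the definitions above translate directly into sample estimators. A minimal Python sketch (the function names are hypothetical, not from the book) computes the usual divide-by-n lag-k estimators and checks them on white noise, where all correlations at nonzero lags vanish:

```python
import numpy as np

def sample_autocov(x, k):
    # gamma_hat(k): average of (x_i - xbar)(x_{i+k} - xbar), the usual
    # divide-by-n estimator of the lag-k autocovariance
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    return np.sum((x[:n - k] - xbar) * (x[k:] - xbar)) / n

def sample_autocorr(x, k):
    # rho_hat(k) = gamma_hat(k) / gamma_hat(0),
    # cf. rho(i,j) = gamma(i,j) / sigma^2 above
    return sample_autocov(x, k) / sample_autocov(x, 0)

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)   # white noise: uncorrelated, assumption 3 holds
print(sample_autocov(x, 0))   # close to the variance, here 1
print(sample_autocorr(x, 1))  # close to 0
```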
The questions one needs to answer are:
  1. How realistic are these assumptions?
  2. If one or more of these assumptions do not hold, to what extent are (1.1), (1.2), and (1.3) wrong, and how can they be corrected?
The first two assumptions depend on the marginal population distribution F only. Here, our main concern is assumption 3. Unless specified otherwise, we therefore assume throughout the book that the first two assumptions hold. The situation involving infinite variance and/or mean is discussed briefly in Chapter 11.
Let us now consider assumption 3. In some cases this assumption is believed to be plausible a priori. In other cases, one tends to believe that the dependence between the observations is so weak that it is negligible for all practical purposes. In particular, in experimental situations one often hopes to force observations to be at least approximately independent, by planning the experiment very carefully. Unfortunately, there is ample practical evidence that this wish does not always become a reality (see, e.g., Sections 1.4 and 1.5). A typical example is the series of standard weight measurements by the US National Bureau of Standards, which is discussed in Sections 1.4 and 7.3. This example illustrates that non-negligible persisting correlations may occur, in spite of all precautions. The reasons for such correlations are not always obvious. Some possible “physical” explanations are discussed in Section 1.3 (see also Secti...
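The effect of such persisting correlations on (1.1) can be made concrete by a small simulation. As an illustration only, the following Python sketch uses an AR(1) process, a simple positively correlated stand-in for the kinds of measurement series discussed above (it is short-memory, not one of the book's long-memory models), and compares the actual variance of the sample mean with the value predicted by (1.1).

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, phi, burn = 100, 5000, 0.7, 200

means = np.empty(reps)
for r in range(reps):
    # AR(1): X_i = phi * X_{i-1} + eps_i, positively correlated observations
    eps = rng.normal(size=n + burn)
    x = np.empty(n + burn)
    x[0] = eps[0]
    for i in range(1, n + burn):
        x[i] = phi * x[i - 1] + eps[i]
    means[r] = x[burn:].mean()  # drop burn-in so the series is stationary

sigma2 = 1.0 / (1.0 - phi**2)  # marginal variance of the stationary AR(1)
print(means.var())             # actual variance of the sample mean
print(sigma2 / n)              # value predicted by (1.1): far too small here
```

Even with this mild, exponentially decaying dependence, formula (1.1) understates the variance of the sample mean by a large factor, so confidence intervals built from (1.2) or (1.3) would be far too narrow; under long-range dependence the discrepancy grows with n.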

Table of contents

  1. Cover
  2. Title Page
  3. Copyright Page
  4. Table of Contents
  5. Preface
  6. 1 Introduction
  7. 2 Stationary processes with long memory
  8. 3 Limit theorems
  9. 4 Estimation of long memory: heuristic approaches
  10. 5 Estimation of long memory: time domain MLE
  11. 6 Estimation of long memory: frequency domain MLE
  12. 7 Robust estimation of long memory
  13. 8 Estimation of location and scale, forecasting
  14. 9 Regression
  15. 10 Goodness of fit tests and related topics
  16. 11 Miscellaneous topics
  17. 12 Programs and data sets
  18. Bibliography
  19. Author index
  20. Subject index