eBook - ePub
Some Basic Theory for Statistical Inference
Monographs on Applied Probability and Statistics
- 118 pages
- English
About This Book
In this book the author, E.J.G. Pitman, presents with elegance and precision some of the basic mathematical theory required for statistical inference, at a level which makes it readable by most students of statistics.
CHAPTER ONE
BASIC PRINCIPLES OF THE THEORY OF INFERENCE, THE LIKELIHOOD PRINCIPLE, SUFFICIENT STATISTICS
In developing the theory of statistical inference, I find it helpful to bear in mind two considerations. Firstly, I take the view that the aim of the theory of inference is to provide a set of principles, which help the statistician to assess the strength of the evidence supplied by a trial or experiment for or against a hypothesis, or to assess the reliability of an estimate derived from the result of such a trial or experiment. In making such an assessment we may look at the results to be assessed from various points of view, and express ourselves in various ways. For example, we may think and speak in terms of repeated trials as for confidence limits or for significance tests, or we may consider the effect of various loss functions. Standard errors do give us some comprehension of reliability; but we may sometimes prefer to think in terms of prior and posterior distributions. All of these may be helpful, and none should be interdicted. The theory of inference is persuasive rather than coercive.
Secondly, statistics being essentially a branch of applied mathematics, we should be guided in our choice of principles and methods by the practical applications. All actual sample spaces are discrete, and all observable random variables have discrete distributions. The continuous distribution is a mathematical construction, suitable for mathematical treatment, but not practically observable. We develop our fundamental concepts, principles and methods, in the study of discrete distributions. In the case of a discrete sample space, it is easy to understand and appreciate the practical or experimental significance and value of conditional distributions, the likelihood principle, the principles of sufficiency and conditionality, and the method of maximum likelihood. These are then extended to more general distributions by means of suitable definitions and mathematical theorems.
Let us consider the likelihood principle. If the sample space is discrete, its points may be enumerated as x1, x2, x3, ….
Suppose that the probability of observing the point xr is f(xr, θ), and that θ is unknown. An experiment is performed, and the outcome is the point x of the sample space. All that the experiment tells us about θ is that an event has occurred, the probability of which is f(x, θ), which for a given x is a function of θ, the likelihood function.
Consider first the case where θ takes only two values θ0, θ1. To decide between θ0 and θ1, all that the experiment gives us is the pair of likelihoods f(x, θ0), f(x, θ1). Suppose that θ is a random variable, taking the v...
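The two-value case can be made concrete with a small numeric sketch. The binomial model below is a hypothetical illustration, not an example from the text: the point is only that, for deciding between θ0 and θ1, the experiment contributes nothing but the pair of likelihoods f(x, θ0), f(x, θ1).

```python
from math import comb

def binom_pmf(x, n, theta):
    """f(x, theta) for a binomial(n, theta) model: the probability of
    observing x successes in n trials."""
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# An experiment yields x = 7 successes in n = 10 trials.  For this fixed
# observed x, f(x, theta) as a function of theta is the likelihood function.
x, n = 7, 10
L0 = binom_pmf(x, n, 0.5)   # likelihood under theta0 = 0.5
L1 = binom_pmf(x, n, 0.7)   # likelihood under theta1 = 0.7

# In the two-value case the experiment gives us only this pair;
# their ratio summarises the evidence for theta1 against theta0.
print(L1 / L0)  # ratio > 1: the observation favours theta1
```

The same pair of likelihoods would arise however the experiment was reported, which is the substance of the likelihood principle discussed above.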
Table of contents
- Cover
- Half Title
- Title Page
- Copyright Page
- Table of Contents
- Preface
- Chapter one Basic Principles of the Theory of Inference, The Likelihood Principle, Sufficient Statistics
- Chapter two Distance between Probability Measures
- Chapter three Sensitivity of a Family of Probability Measures with respect to a Parameter
- Chapter four Sensitivity Rating, Conditional Sensitivity, The Discrimination Rate Statistic
- Chapter five Efficacy, Sensitivity, The Cramér–Rao Inequality
- Chapter six Many Parameters, The Sensitivity Matrix
- Chapter seven Asymptotic Power of a Test, Asymptotic Relative Efficiency
- Chapter eight Maximum Likelihood Estimation
- Chapter nine The Sample Distribution Function
- Appendix: Mathematical Preliminaries
- References
- Index