Mathematics

Point Estimation

Point estimation is a statistical method used to estimate an unknown parameter of a population based on sample data. It involves using a single value, such as the sample mean or median, to represent the population parameter. The goal is to find the best estimate of the parameter, taking into account factors like bias and variability.

Written by Perlego with AI-assistance

10 Key excerpts on "Point Estimation"

  • Introductory Probability and Statistics

    Applications for Forestry and Natural Sciences (Revised Edition)

    • Robert Kozak, Antal Kozak, Christina Staudhammer, Susan Watts (Authors)
    • 2019(Publication Date)
    8     Estimation
    Determining the Value of Population Parameters
    Statistical inference is the process of drawing a conclusion about a population parameter from information obtained in a sample. We can make decisions concerning the value of a parameter or we can estimate the value of a parameter. Decision making or tests of hypothesis will be introduced in Chapter 9. Statistical estimation, which can be classified as either point estimation or interval estimation, is discussed in this chapter. A point estimate is a single numeric value calculated from the information in a sample. An interval estimate yields two numeric values, between which we can reliably expect to find the target parameter.

    8.1    Point Estimation

    A point estimate of a given population parameter is numeric: our ‘best guess’ of the true value. For instance, the point estimate of the population mean, μ, is $\bar{x}$ (the sample mean), which is the sample value of the statistic computed from a sample of size n from the population. The statistic is called an estimator and the single value, $\bar{x}$, is called an estimate. For example, if the mean height of a sample of 10 western hemlock trees is 15.6 m, 15.6 is the point estimate of the unknown population mean. Similarly, the sample proportion, $\hat{p}$, is an estimator of the population proportion, p, while $\hat{p}$ (the sample proportion) is an estimate. The sample variance, $S^2$, is the estimator of the population variance, $\sigma^2$, while $s^2$ is the estimate.
    For any population parameter θ, the quality of a point estimator is judged according to the following characteristics:
    1. The estimator should be unbiased. In other words, the mean of the sampling distribution is equal to the population parameter. Thus, when we sample, on average we expect to estimate the true population parameter:
    $E(\hat{\theta}) = \theta$
    Based on the Central Limit Theorem (discussed in Chapter 7), we can conclude that the sample mean, sample proportion and sample variance are all unbiased estimators. Although the sample standard deviation, s, is not an unbiased estimator, if the sample size, n
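    The estimator-versus-estimate distinction is easy to see in code. Below is a minimal Python sketch of the three point estimates this excerpt names; the tree heights are hypothetical values invented for illustration, not data from the book.

    ```python
    import numpy as np

    # Hypothetical sample of n = 10 western hemlock heights in metres
    # (illustrative values only, not taken from the excerpt).
    heights = np.array([14.2, 16.1, 15.8, 15.0, 16.4, 14.9, 15.7, 16.0, 15.3, 16.6])

    x_bar = heights.mean()           # point estimate of the population mean, mu
    s2 = heights.var(ddof=1)         # point estimate of sigma^2 (the n - 1 divisor keeps it unbiased)
    p_hat = (heights > 15.0).mean()  # sample proportion, e.g. of trees taller than 15 m

    print(f"mean estimate:       {x_bar:.2f} m")
    print(f"variance estimate:   {s2:.3f} m^2")
    print(f"proportion estimate: {p_hat:.2f}")
    ```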
  • Quantitative Investment Analysis
    • Richard A. DeFusco, Dennis W. McLeavey, Jerald E. Pinto, David E. Runkle, Mark J. P. Anson (Authors)
    • 2015(Publication Date)
    • Wiley
      (Publisher)
    We next discuss the concepts and tools related to estimating the population parameters, with a special focus on the population mean. We focus on the population mean because analysts are more likely to meet interval estimates for the population mean than any other type of interval estimate.

    4. Point and Interval Estimates of the Population Mean

    Statistical inference traditionally consists of two branches, hypothesis testing and estimation. Hypothesis testing addresses the question “Is the value of this parameter (say, a population mean) equal to some specific value (0, for example)?” In this process, we have a hypothesis concerning the value of a parameter, and we seek to determine whether the evidence from a sample supports or does not support that hypothesis. We discuss hypothesis testing in detail in the reading on hypothesis testing.
    The second branch of statistical inference, and the focus of this reading, is estimation. Estimation seeks an answer to the question “What is this parameter’s (for example, the population mean’s) value?” In estimating, unlike in hypothesis testing, we do not start with a hypothesis about a parameter’s value and seek to test it. Rather, we try to make the best use of the information in a sample to form one of several types of estimates of the parameter’s value. With estimation, we are interested in arriving at a rule for best calculating a single number to estimate the unknown population parameter (a point estimate). Together with calculating a point estimate, we may also be interested in calculating a range of values that brackets the unknown population parameter with some specified level of probability (a confidence interval). In Section 4.1 we discuss point estimates of parameters and then, in Section 4.2, the formulation of confidence intervals for the population mean.

    4.1. Point Estimators

    An important concept introduced in this reading is that sample statistics viewed as formulas involving random outcomes are random variables. The formulas that we use to compute the sample mean and all the other sample statistics are examples of estimation formulas or estimators. The particular value that we calculate from sample observations using an estimator is called an estimate. An estimator has a sampling distribution; an estimate is a fixed number pertaining to a given sample and thus has no sampling distribution. To take the example of the mean, the calculated value of the sample mean in a given sample, used as an estimate of the population mean, is called a point estimate
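    The point that an estimator has a sampling distribution while an estimate is a fixed number lends itself to a quick simulation. The following sketch uses an arbitrarily chosen population, so all parameter values are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n = 100.0, 15.0, 25   # hypothetical population parameters

    # One sample yields one estimate: a fixed number with no sampling distribution.
    one_sample = rng.normal(mu, sigma, n)
    print(f"one point estimate of mu: {one_sample.mean():.2f}")

    # The estimator X-bar, viewed across 10,000 repeated samples, is a random variable.
    means = rng.normal(mu, sigma, (10_000, n)).mean(axis=1)
    print(f"mean of the estimator: {means.mean():.2f}  (mu = {mu})")
    print(f"std of the estimator:  {means.std():.2f}  (sigma/sqrt(n) = {sigma / np.sqrt(n):.2f})")
    ```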
  • An Introduction to Probability and Statistical Inference
    • George G. Roussas (Author)
    • 2003(Publication Date)
    • Academic Press
      (Publisher)
    Chapter 9

    Point Estimation

    In the previous chapter, the basic terminology and concepts of parametric point estimation were introduced briefly. In the present chapter, we are going to elaborate extensively on this matter. For brevity, we will use the term estimation rather than parametric point estimation. The methods of estimation to be discussed here are those listed in the first section of the previous chapter; namely, maximum likelihood estimation, estimation through the concepts of unbiasedness and minimum variance (which lead to uniformly minimum variance estimates), estimation based on decision-theoretic concepts, and estimation by the method of moments. The method of estimation by way of the principle of least squares is commonly used in the so-called linear models. Accordingly, it is deferred to Chapter 13.
    Before we embark on the mathematical derivations, it is imperative to keep in mind the big picture; namely, why do we do what we do? A brief description is as follows. Let X be a r.v. with p.d.f. f(·; θ), where θ is a parameter lying in a parameter space Ω. It is assumed that the functional form of the p.d.f. is completely known. So, if θ were known, then the p.d.f. would be known, and consequently we could calculate, in principle, all probabilities related to X, the expectation of X, its variance, etc. The problem, however, is that most often in practice (and in the present context) θ is not known. Then the objective is to estimate θ on the basis of a random sample of size n from f(·; θ), $X_1, \ldots, X_n$.
    Then, replacing θ in f(·; θ) by a “good” estimate of it, one would expect to be able to use the resulting p.d.f. for the purposes described above to a satisfactory degree.

    9.1 Maximum Likelihood Estimation: Motivation and Examples

    The following simple example is meant to shed light on the intuitive, yet quite logical, principle of maximum likelihood estimation.
    EXAMPLE 1
    Let $X_1, \ldots, X_{10}$ be i.i.d. r.v.’s from the $B(1, \theta)$ distribution, $0 < \theta < 1$, and let $x_1, \ldots, x_{10}$ be the respective observed values. For convenience, set $t = x_1 + \cdots + x$
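    Although the excerpt cuts off here, the likelihood it sets up can be sketched numerically. In the sketch below the ten observed values are hypothetical stand-ins; for Bernoulli trials the likelihood $\theta^t (1-\theta)^{10-t}$ is maximized at $t/10$, the sample mean, and a grid search recovers the same answer.

    ```python
    import numpy as np

    # Hypothetical observed values for ten B(1, theta) trials (invented for illustration).
    x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
    t = x.sum()  # t = x_1 + ... + x_10

    # Likelihood L(theta) = theta^t * (1 - theta)^(10 - t), evaluated on a grid.
    theta = np.linspace(0.01, 0.99, 99)
    L = theta**t * (1 - theta) ** (len(x) - t)

    print(f"grid maximiser:    {theta[L.argmax()]:.2f}")
    print(f"closed form t / n: {t / len(x):.2f}")  # the Bernoulli MLE is the sample mean
    ```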
  • Applied Medical Statistics
    • Jingmei Jiang (Author)
    • 2022(Publication Date)
    • Wiley
      (Publisher)
    μ. We give the general definition of point estimation.
    Definition 6.4
    Let θ denote a parameter of population X and $\hat{\theta} = \hat{\theta}(X_1, X_2, \ldots, X_n)$ denote a statistic that represents a numerical estimate of θ from the measurements contained in a sample. Any possible $\hat{\theta}$ is then called a point estimator of the parameter θ.
    In fact, according to Definition 6.4, each $\bar{x}$ and $s^2$ shown in Table 6.1 is a point estimator of the population parameters.
    How “good” is a single-valued estimate of the population parameter? We partially addressed this question in Section 6.1, where the concept of the sufficient statistic suggested that such statistics can be considered a good option. What remains unanswered is how to evaluate the quality of these estimators. We introduce three criteria for evaluating the quality of point estimators.
    1. Unbiasedness
    Definition 6.5
    Let $X_1, X_2, \ldots, X_n$ be a random sample of size n from a population X with parameter θ, regardless of its underlying distribution. Let $\hat{\theta} = \hat{\theta}(X_1, X_2, \ldots, X_n)$ be an estimator of θ. An estimator $\hat{\theta}$ of parameter θ is unbiased if
    $E(\hat{\theta}) = \theta$   (6.13)
    Unbiasedness is a desirable property in a point estimate. Although the estimator $\hat{\theta}$ is unlikely to be exactly equal to parameter θ, the average value of $\hat{\theta}$ over a large number of random samples of size n is θ.
    For example, because the random variables $X_1, X_2, \ldots, X_n$ are independent and have an identical distribution to population X, we have $E(X_i) = \mu$, $\mathrm{Var}(X_i) = \sigma^2$ $(i = 1, 2, \ldots, n)$. The sample mean $\bar{X}$ is then an unbiased estimator of the population mean μ
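    Definition 6.5 can be checked by simulation. The sketch below uses an arbitrary normal population (all parameter values are assumptions for illustration): averaging $\bar{X}$ and $S^2$ over many random samples lands close to μ and σ², while a variance computed with an n divisor is biased low.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, n, reps = 5.0, 2.0, 20, 50_000  # hypothetical population and design

    samples = rng.normal(mu, sigma, (reps, n))

    # E(theta-hat) = theta for unbiased estimators, approximated by long-run averages.
    print(f"E[X-bar]    ~ {samples.mean(axis=1).mean():.3f}  (mu      = {mu})")
    print(f"E[S^2, n-1] ~ {samples.var(axis=1, ddof=1).mean():.3f}  (sigma^2 = {sigma**2})")
    print(f"E[S^2, n]   ~ {samples.var(axis=1, ddof=0).mean():.3f}  (biased low)")
    ```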
  • Statistical Theory

    A Concise Introduction

    Chapter 2 Point Estimation
    DOI: 10.1201/9781003175407-2

    2.1 Introduction

    As we have discussed in Section 1.1, estimation of the unknown parameters of distributions from the data is one of the key issues in statistics.
    Suppose that $Y_1, \ldots, Y_n$ is a random sample from a distribution $f_\theta(y)$, where $f_\theta(y)$ is a probability or a density function for a discrete or continuous random variable, respectively, assumed to belong to a parametric family of distributions $\mathcal{F}_\theta$, $\theta \in \Theta$. In other words, the data distribution $f_\theta$ is supposed to be known up to the unknown parameter(s) $\theta \in \Theta$. For example, the birth data in Example 1.1 is a random sample from a Bernoulli $B(1, p)$ distribution with the unknown parameter p. The goal is to estimate the unknown θ from the data. We start from a general definition of an estimator:
    Definition 2.1 (estimator) A (point) estimator $\hat{\theta} = \hat{\theta}(\mathbf{Y})$ of an unknown parameter θ is any statistic used for estimating θ. The value of $\hat{\theta}(\mathbf{y})$ evaluated for a given sample is called an estimate.
    Thus, $\bar{Y}$, $Y_{\max} = \max(Y_1, \ldots, Y_n)$, and $Y_3 \log(|Y_1|)\, Y_2 Y_5$ are examples of estimators of θ. This is a general, somewhat trivial definition that still does not say anything about the goodness of estimation. One would evidently be interested in “good” estimators. In this chapter, we firstly present several methods of estimation and then define and discuss their goodness.

    2.2 Maximum likelihood estimation

    This is probably the most used method of estimation of parameters in parametric models. Its underlying idea is simple and intuitively clear. Recall that, given a random sample $Y_1, \ldots, Y_n \sim f_\theta(y)$, $\theta \in \Theta$, the likelihood function $L(\theta; \mathbf{y})$ defined in Section 1.2 is the joint probability (for a discrete random variable) or density (for a continuous random variable) of the observed data as a function of the unknown parameter(s) θ:
    $L(\theta; \mathbf{y}) = \prod_{i=1}^{n} f_\theta(y_i)$
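    In practice one usually works with the logarithm of this product, turning it into a sum. A minimal sketch, assuming a hypothetical $N(\theta, 1)$ sample: maximizing the log-likelihood over a grid recovers the sample mean, which is the known MLE of a normal mean.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.normal(3.0, 1.0, 40)  # hypothetical sample; theta = 3.0 is unknown to the "analyst"

    def log_likelihood(theta: float) -> float:
        # log L(theta; y) = sum_i log f_theta(y_i) for the N(theta, 1) density
        return -0.5 * np.sum((y - theta) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

    grid = np.linspace(0.0, 6.0, 601)
    mle = grid[np.argmax([log_likelihood(t) for t in grid])]
    print(f"grid MLE: {mle:.2f}   sample mean: {y.mean():.2f}")  # coincide up to the grid step
    ```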
  • Handbook of Complementary Methods in Education Research
    • Judith L. Green, Gregory Camilli, Patricia B. Elmore (Authors)
    • 2012(Publication Date)
    • Routledge
      (Publisher)
    15 Estimation
    Juliet Popper Shaffer
    University of California, Berkeley
    DOI: 10.4324/9780203874769-17
    An estimate can be described as “an educated guess about a quantity that is unknown but concerning which some information is available” (Lehmann 1982). Estimation abounds in our everyday lives. Polls and surveys are conducted constantly and are used to obtain estimates of many quantities: the proportion of high school graduates in a state, the average income and the distribution of incomes in a particular professional field, the average proficiency of students and how it has changed over time. Intervention research results in estimates of other types of quantities: the difference in average length of life of cancer patients getting different medical treatments, the differences in reading proficiency of students given different types of reading instruction. Every aspect of our lives depends on obtaining appropriate estimates of many quantities.
    Generally, estimates are based on data, consisting of some set of units (e.g., individuals, farms, schools, cameras) with values on quantitative variables of interest (e.g., test scores, crop productivity, neighborhood economic level, image quality). Sometimes we are interested only in describing the data, such as when a teacher obtains test scores on the students in her/his class; in that case, estimation does not come into the picture. The interest in these cases is in the kinds of summaries that give good pictures of the data set; this area is called descriptive statistics or data analysis. Descriptions of numerical quantities usually include some measures of central value and spread. There are many possible measures of each; of the former, the mean and median of the data are often given; of the latter, the variance or standard deviation and the interquartile range are commonly supplied. For some types of data, such as majors of students in a class, proportions are of interest. Other descriptions, involving two or more variables (e.g., student scores on a set of tests), are correlations and other measures of relationship, and mean differences and other differences among subgroups. Graphical representations of many kinds are useful, and can give good visual descriptions of the complete data distribution. Some aspects of data analysis are discussed in Chapter 30
  • Understanding Statistics
    • Bruce J. Chalmer (Author)
    • 2020(Publication Date)
    • CRC Press
      (Publisher)
    The first task for which we use statistical inference is to make statements about parameters using statistics. Of course, if we have access to data for the entire population, no inference is needed; we can simply calculate the parameters in which we are interested. But usually, we must use sample statistics to help us make statements about population parameters. These statements can be of several types.

    Estimation and hypothesis testing

    One type of statement about a parameter is simply to estimate its value. An estimate can be a point estimate, meaning a single value, or an interval estimate, meaning a range of values. For example, we might estimate that the number of people who will die in traffic accidents over a certain weekend will be 450 (a point estimate), or we might estimate the number to be between 430 and 470 (an interval estimate).
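    As a rough illustration of the two kinds of estimate, here is a minimal sketch; the fatality counts are simulated stand-ins for past data, and the 1.96 multiplier assumes the usual large-sample normal approximation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical fatality counts from 30 comparable past weekends (simulated, not real data).
    deaths = rng.poisson(450, 30)

    point = deaths.mean()                            # point estimate: a single value
    se = deaths.std(ddof=1) / np.sqrt(len(deaths))   # standard error of the mean
    lo, hi = point - 1.96 * se, point + 1.96 * se    # interval estimate: a range of values

    print(f"point estimate:    {point:.0f}")
    print(f"interval estimate: ({lo:.0f}, {hi:.0f})")
    ```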
    Another type of statement about a parameter derives from a procedure called hypothesis testing. Hypothesis testing is related to estimation, but the emphasis is different. For example, we might wish to determine whether there is any difference in average amount of pain reported after oral surgery between people taking a drug and people receiving a placebo (a substance that has no inherent effect on pain). We are looking for statistical evidence that the drug has some effect on pain beyond the placebo effect. (Curiously, many studies have shown that about a third of people who receive placebos derive considerable pain relief.) The emphasis is not on estimating the degree of difference between drug and placebo, but simply on testing whether there is a difference.

    Determining degree of certainty

    The second task for which we use statistical inference is to determine how certain we are that our statements are true. It may seem odd to make statements if we are not sure they are true. (Actually, the odd part is admitting it!) But as we noted in Chapter 1
  • Statistical Methods for Communication Science
    • Andrew F. Hayes (Author)
    • 2020(Publication Date)
    • Routledge
      (Publisher)
    SEVEN Parameter Estimation
    DOI: 10.4324/9781410613707-7
    Pick up the newspaper or turn on the television and no doubt you will read about or hear the result of one of the many polls taken daily of the public’s beliefs, attitudes, values, and opinions. Claims such as “Sixty five percent of Americans approve of the way the President is handling the economy,” “48% of people age 40 or over have not started saving for retirement,” “54% of high school kids report that they could obtain marijuana if they wanted it” are not difficult to find in the media. If you look closely at such polls, you will notice that typically they are based on the responses of only a small fraction of the people in the population that the pollsters are making statements about. How is it possible that a researcher can make a statement about an entire population by asking questions of only a small, indeed tiny, fraction of the population of interest?
    The ability to make inferences about the opinions and behavior of large groups by studying only a sample from that group stems from the theory of parameter estimation, the topic of this chapter. Parameter estimation is used for the purpose of making population inferences—inferences about the value of some unknown population characteristic given limited information about the population. The theory of estimation is both elegant and central to statistics. You need to understand it and understand it well.

    7.1 The Theory of Estimation

    Suppose you are a reporter for the Chronicle of Higher Education and are putting together a story on the research activity of a new Department of Communication at Anytown State University. Your goal is to provide information in your story about the mean number of publications that faculty in the department have published over the course of their careers as part of your profile on the department. Suppose there are 8 faculty members in this new department. If you were resourceful enough and had sufficient time, it might be possible for you to determine that number exactly by contacting each of the 8 faculty members, getting a copy of their curriculum vitae, and computing the mean across all N = 8 faculty. (Here I use capital N to denote the size of the population.) If you were able to do so, you would be able to make an unequivocal statement about the parameter of interest—the population mean (μ
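    The contrast the example draws, between computing a parameter from the whole population and estimating it from a sample, fits in a few lines. The publication counts below are hypothetical; only the N = 8 population size comes from the excerpt.

    ```python
    import numpy as np

    # Hypothetical publication counts for all N = 8 faculty (the entire population).
    pubs = np.array([3, 12, 7, 0, 25, 9, 4, 14])
    mu = pubs.mean()  # the parameter itself: with full data, no inference is needed

    # With only a sample of the faculty, the sample mean merely estimates mu.
    rng = np.random.default_rng(4)
    sample = rng.choice(pubs, size=4, replace=False)

    print(f"population mean mu: {mu:.2f}")
    print(f"sample estimate:    {sample.mean():.2f}")
    ```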
  • The Foundations of Statistics

    CHAPTER 15

    Point Estimation

    1 Introduction

    This chapter discusses point estimation, and the next two discuss the testing of hypotheses and interval estimation, respectively. Definitions of these processes must be sought in due course; but, for the moment, whatever notions about them you happen to have will afford sufficient background for certain introductory remarks applying equally well to both kinds of estimation and to testing.
    Estimating and testing have been, and inertia alone would insure that they will long continue to be, cornerstones of practical statistics. Their development has until recently been almost exclusively in the verbalistic tradition, or outlook. For example, testing and interval estimation have often been expressed as problems of making assertions, on the basis of evidence, according to systems that lead, with high probability, to true assertions, and point estimation has even been decried as ill-conceived because it is not so expressible.
    Wald’s minimax theory has, as was explained in § 9.2, stimulated interest in the interpretation of problems of estimation and testing in behavioralistic terms; to objectivists this has, of course, meant interpretation as objectivistic decision problems. For reasons discussed in § 9.2, it does seem to me that any verbalistic concept in statistics owes whatever value it may have to the possibility of one or more behavioralistic interpretations.
    The task of any such interpretation from one framework of ideas to another is necessarily delicate. In the present instance, there is a particular temptation to force the interpretation, namely, so that criteria proposed by the verbalistic outlook are translated into applications of the minimax theory, that is, of the minimax rule and the sure-thing principle (as expressed by the criterion of admissibility), for these are the only general criteria thus far proposed and seriously maintained for the solution of objectivistic decision problems. Of course it is to be expected, and I hope later sections of this chapter and the next demonstrate, that unforced interpretations do often translate verbalistic criteria into applications of the behavioralistic ones. In evaluating any such interpretations, it must be borne in mind that an analogy of great mathematical value may be valueless as an interpretation; correspondingly, what is put forward as mere analogy should not be taken to be an interpretation, much less branded as a forced one. For example, attention has already been called (in § 11.4) to the danger of regarding the analogy between the theory of two-person games and that of the minimax rule for objectivistic decision problems as an interpretation. In fact, minimax problems are of such mathematical generality that they arise, even within statistics, in contexts other than direct application of the minimax rule to objectivistic decision problems; a striking, though technical, example is Theorem 2.26 of Wald’s book [W3
  • Signal Processing in Radar Systems
    • Vyacheslav Tuzlukov (Author)
    • 2017(Publication Date)
    • CRC Press
      (Publisher)
    t), within the limits of which we are able to obtain the estimate of the parameter with the given reliability. In the case of a point estimate, the root-mean-square deviation of the estimate, or another convenient function characterizing the deviation of the estimate from the true value of the estimated random process parameter, can be considered as the measure of reliability. From the viewpoint of interval sequential estimation, the estimate reliability can be defined using the length of the confidence interval with a given confidence coefficient.

    11.2 Point Estimate and its Properties

    To make a point estimation means that some number γ = γ[x(t)] from the interval ℒ of possible values of the estimated random process parameter l must correspond to each possible received realization x(t). This number γ = γ[x(t)] is called the point estimate. Owing to the random nature of the point estimate of a random process parameter, it is characterized by the conditional pdf p(γ|l). This is a general and complete characteristic of the point estimate. The shape of this pdf defines the quality of the point estimate and, consequently, all of its properties. For a given estimation rule γ = γ[x(t)], the conditional pdf p(γ|l) can be obtained from the pdf of the received realization x(t) based on the well-known transformations of pdfs [4]. We note that a direct determination of the pdf p(γ|l) is very difficult for many applied problems. Because of this, if there is reason to suppose that this pdf is a unimodal function and very close to symmetric, then the bias, dispersion, and variance of the estimate, which can be determined without direct definition of p(γ|l), are widely used as characteristics of the estimate γ.
    In accordance with these definitions, the bias, dispersion, and variance of the estimate are defined as follows:
    $b(\gamma \mid l) = \langle \gamma - l \rangle = \int_{\mathcal{X}} [\gamma(X) - l]\, p(X \mid l)\, dX;$   (11.11)
    $D(\gamma \mid l) = \langle (\gamma - l)^2 \rangle = \int_{\mathcal{X}} [\gamma(X) - l]^2\, p(X \mid l)\, dX;$   (11.12)
    $\mathrm{Var}(\gamma \mid l) = \langle (\gamma - \langle \gamma \rangle)^2 \rangle = \int_{\mathcal{X}} [\gamma(X) - \langle \gamma \rangle$
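    The characteristics in (11.11), (11.12), and the truncated variance definition can be approximated by Monte Carlo averages in place of the integrals. A minimal sketch under assumed values (Gaussian noise and a deliberately biased rule γ equal to 0.9 times the sample mean; all numbers hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    l_true, sigma, n, reps = 2.0, 1.0, 10, 100_000  # hypothetical parameter and noise model

    x = rng.normal(l_true, sigma, (reps, n))  # repeated noisy realizations, discretized
    gamma = 0.9 * x.mean(axis=1)              # a deliberately biased estimation rule gamma[x]

    bias = gamma.mean() - l_true                 # b(gamma | l), cf. (11.11)
    dispersion = ((gamma - l_true) ** 2).mean()  # D(gamma | l), cf. (11.12)
    variance = gamma.var()                       # Var(gamma | l), the truncated third definition

    print(f"bias: {bias:+.3f}  dispersion: {dispersion:.3f}  variance: {variance:.3f}")
    print(f"check D = Var + b^2: {variance + bias ** 2:.3f}")
    ```

    The final line checks the standard decomposition D = Var + b², which ties the three characteristics together.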
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.