Technology & Engineering

Single Sample T Test

A single sample t-test is a statistical method used to determine if the mean of a single sample is significantly different from a known or hypothesized population mean. It is commonly used in research and engineering to assess the significance of experimental results or to compare a sample mean to a known standard. The t-test provides a way to make inferences about population parameters based on sample data.

Written by Perlego with AI-assistance

4 Key excerpts on "Single Sample T Test"

  • Sensory Evaluation of Food

    Statistical Methods and Procedures

    • Michael O'Mahony(Author)
    • 2017(Publication Date)
    • CRC Press
      (Publisher)
    t tests on the Key to Statistical Tables. It must just be borne in mind that “one sample” can mean more than one thing.
    Look up these three t tests on the Key to Statistical Tables to see their relationship to the other tests.
    Who Is Student?
    Student is the pen name of William Sealy Gosset (1876-1937), who worked for the Guinness Brewery in Dublin. He published details of his test in Biometrika in 1908 under a pen name because the brewery did not want their rivals to realize that they were using statistics.
    t Distribution
    The t distribution resembles the normal distribution except that it is flatter (more platykurtic) and its shape alters with the size of the sample (actually the number of degrees of freedom).
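The flatter shape described above means the t distribution puts more probability in its tails than the normal curve, especially at small degrees of freedom. A small numerical sketch with SciPy (the df values are illustrative, not from the text):

```python
from scipy import stats

# Tail probability beyond t = 2 for several degrees of freedom;
# smaller df -> flatter curve with heavier tails -> more tail probability
for df in (3, 10, 100):
    print(f"df = {df:>3}: P(T > 2) = {stats.t.sf(2, df):.4f}")

# For comparison, the normal curve's tail beyond z = 2
print(f"normal:   P(Z > 2) = {stats.norm.sf(2):.4f}")
```

As df grows the t tail probability converges to the normal value, which is why large-sample t tests behave like z tests.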

    7.2 The One-Sample t Test: Computation

    The one-sample t test is rarely used in sensory analysis and the behavioral sciences, but it is included here for the sake of completeness. In the one-sample t test we are testing whether a sample with a given mean, X̄, came from a population with a given mean μ. In other words, is the mean of the sample (X̄) significantly different from the mean of the population (μ)? t is calculated using the formula

    t = (X̄ − μ) / S_X̄

    where
    X̄ = mean of the sample
    μ = mean of the population
    S_X̄ = estimate from the sample of the standard error of the mean.
    As discussed in the section on z tests, the standard error is the standard deviation of the sampling distribution, a theoretical distribution of means of samples of size N, drawn from the population of mean μ. This estimate is obtained from the sample standard deviation S and the sample size N using the formula

    S_X̄ = S / √N

    Essentially, the calculation is one of seeing whether the difference between the means (X̄ − μ) is large compared to some measure of how the scores might vary by chance [i.e., how they are spread (S_X̄)].
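The computation just described can be sketched in a few lines of Python. The scores and the hypothesized population mean below are made-up illustrative values, not data from the text:

```python
import math

# Hypothetical sample of scores (illustrative values only)
scores = [6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2, 5.7]
mu = 5.5  # hypothesized population mean

n = len(scores)
x_bar = sum(scores) / n
# Sample standard deviation S (n - 1 in the denominator)
s = math.sqrt(sum((x - x_bar) ** 2 for x in scores) / (n - 1))
s_xbar = s / math.sqrt(n)     # standard error of the mean, S / sqrt(N)
t = (x_bar - mu) / s_xbar     # one-sample t, df = n - 1
print(f"t = {t:.3f} with {n - 1} degrees of freedom")
```

The resulting t ratio is then compared against the critical value from a t table at df = n − 1.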
  • Research Methods and Statistics in Psychology
    A t-test can be used: for testing the difference between the mean of a single sample and some hypothesized population mean; for testing the difference between the means for a single sample on two different variables; and for testing the difference between two samples on a single variable.
    In each case we find a value for t that corresponds to a probability that the results are due to random error. This probability corresponds to the level of inferential uncertainty. However, once we have obtained that probability we have some additional decisions to make.
    Having found a t-value and a probability level, there are several choices as to how to treat, interpret and use these. The traditional approach has been to perform significance tests. To understand much of the literature that has already been published in psychology you will need to have some familiarity with this approach. Nowadays, however, important alternatives are available. These techniques include the use of confidence intervals and effect sizes.
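The three approaches mentioned (a significance test, a confidence interval, and an effect size) can all be computed from the same sample. A sketch with SciPy, using made-up scores and a made-up hypothesized mean for illustration:

```python
import numpy as np
from scipy import stats

scores = np.array([6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2, 5.7])  # illustrative data
mu = 5.5  # hypothesized population mean

# 1. Traditional significance test
res = stats.ttest_1samp(scores, popmean=mu)

# 2. 95% confidence interval for the population mean, via the t distribution
n = scores.size
se = scores.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (scores.mean() - t_crit * se, scores.mean() + t_crit * se)

# 3. Effect size (Cohen's d for a one-sample design)
d = (scores.mean() - mu) / scores.std(ddof=1)

print(f"t = {res.statistic:.3f}, p = {res.pvalue:.4f}")
print(f"95% CI for the mean: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"Cohen's d = {d:.2f}")
```

Note how the three outputs tell one consistent story: a small p-value, a confidence interval that excludes the hypothesized mean, and a large standardized difference.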
    Checklist
    Revisiting the key goals for this chapter
    • □ I understand how to conduct a t-test – a statistical procedure that allows me to establish how likely it is that differences between two means of the size we have observed could be produced by a random process (drawing random samples of specified size from a specified population).
    • □ I know how different types of t-test are used to examine different types of data.
    • □ I understand the process of interpreting the results of t-tests, and the different ways in which this can be done.

    Further reading

    Wilkinson and colleagues (1999) summarize the findings of the American Psychological Association’s Task Force on Statistical Inference, and we recommend that you read this (it can be found online at: www.apa.org/science/leadership/bsa/statistical/tfsi-followup-report.pdf).
  • Introduction to Statistics in Human Performance
    • Dale Mood, James Morrow, Jr., Matthew McQueen(Authors)
    • 2019(Publication Date)
    • Routledge
      (Publisher)
    7
    Two-Sample t-test

    INTRODUCTION

    The one-sample case we examined at the end of Chapter 6 contains many of the important concepts of inferential statistics, but it has rather limited application in human performance research and in science in general. As we saw, the primary use of the one-sample case is to compare the mean of a sample (X̄) to a hypothesized mean of a population (μ). We learned that we can compare the two means and either accept the hypothesis that they are not different, except for sampling error (this is called the null hypothesis and symbolized as H0), or reject this hypothesis. We also saw that we can even attach various levels of confidence to our decision.
    We learned two ways to reach our decision. First, we learned how to construct a confidence interval around the sample mean and then to check to see whether the hypothesized population mean is located within this interval. If it is, we decide the null hypothesis is true and conclude that the difference between the two means is simply the result of sampling error. If the hypothesized population mean is not located in the interval, we reject the null hypothesis (that the means are not different) and accept the alternative hypothesis (symbolized as H1) that they are, in fact, different. Recall that the width of the confidence interval is a function of the level of confidence that is desired, the size of the sample, and the variability of the values in the population (represented by σ).
    The second way to reach the same conclusion regarding the equality of the means, called hypothesis testing, is to determine where the sample mean is located on a hypothetical sampling distribution. This sampling distribution would, theoretically, be constructed by taking many, many different samples of the same size (N) from the population for which we know μ and σ, calculating the mean of each sample, and plotting them in a frequency distribution. The result would be a normal curve with a mean of μ and a standard deviation (here called the standard error of the mean, σ_X̄) equal to

    σ_X̄ = σ / √N

    It is important to understand that the standard error of the mean is actually the standard deviation of the sampling distribution of the means that were calculated across many, many samples and then plotted. Although we never actually construct this sampling distribution, we do use our knowledge of the normal curve and the value of σ_X̄ to locate where our sample mean would reside in the hypothetical sampling distribution. We can create a 95% confidence interval by multiplying the value of σ_X̄
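The hypothetical sampling distribution described above can be simulated directly. A sketch assuming a normal population with made-up parameters (μ = 100, σ = 15, N = 25; none of these come from the text): draw many samples of size N, keep each sample's mean, and compare the standard deviation of those means with σ/√N.

```python
import random
import statistics

random.seed(42)
# Assumed population parameters, for illustration only
mu, sigma, n = 100.0, 15.0, 25

# Build the sampling distribution empirically: many samples of size n,
# keeping each sample's mean
means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(10_000)
]

empirical_se = statistics.stdev(means)   # SD of the sample means
theoretical_se = sigma / n ** 0.5        # sigma / sqrt(N) = 3.0

print(f"empirical SE  ≈ {empirical_se:.2f}")
print(f"theoretical SE = {theoretical_se:.2f}")
```

The two values agree closely, illustrating that the standard error of the mean really is the standard deviation of the sampling distribution even though we never construct that distribution in practice.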
  • Statistics in Kinesiology
    • Joseph P. Weir, William J. Vincent(Authors)
    • 2020(Publication Date)
    • Human Kinetics
      (Publisher)
    The t test for a single sample produces the ratio of the actual mean difference between the sample and the population to the expected mean difference (that amount of difference between X̄ and μ that can be expected to occur by chance alone). The expected mean difference is estimated by equation 10.02 and is called the standard error of the mean. To interpret t for a single sample, we must first find the degrees of freedom, which can be calculated by the formula df = n − 1. The t ratio is compared with the values for a two-tailed test (one- and two-tailed tests are explained later in this chapter) from the t distribution in table A.3a for the appropriate degrees of freedom. When t exceeds the value in table A.3a for a given α level, we may conclude that X̄ was not drawn from the population with a mean of μ. When t is less than the critical ratio in table A.3a, the null hypothesis (H0) is accepted; no reliable difference exists between X̄ and μ. When t exceeds the critical ratio, H0 is rejected and H1 is accepted; we infer that some factor other than chance is operating on the sample mean. Notice that in table A.3a, when degrees of freedom are large (df > 99), the values for a two-tailed t test at a given α value are the same as the values read from table A.1 for a Z test (1.65, 1.96, 2.24, and 2.58).
    This technique is useful for determining whether influences introduced by an experiment have an effect on the subjects. If we know or estimate the population parameters and then draw a random sample and treat it in a manner that is expected to alter its mean value, we can determine the odds that the treatment had an effect by using a t test. If t exceeds the critical ratios in table A.3a, we can conclude that the treatment was effective because the odds are high that the sample is no longer representative of the population from which it was drawn.
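The table lookup described above can be reproduced with software rather than a printed table. A sketch using SciPy (α = .05, two-tailed; the df values are illustrative): the critical t shrinks toward the z value 1.96 as the degrees of freedom grow, just as the excerpt notes for large df.

```python
from scipy import stats

# Two-tailed critical values of t for a few degrees of freedom
for df in (5, 10, 30, 1000):
    t_crit = stats.t.ppf(1 - 0.05 / 2, df)   # alpha = .05, two-tailed
    print(f"df = {df:>4}: t_crit = {t_crit:.3f}")

# With large df the t distribution converges to the normal curve,
# so the critical value approaches the z value 1.96
z_crit = stats.norm.ppf(1 - 0.05 / 2)
print(f"z_crit = {z_crit:.3f}")
```

A computed t ratio exceeding the critical value for the appropriate df leads to rejecting H0, exactly as in the table-based procedure.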
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.