
Errors in Hypothesis Testing

Errors in hypothesis testing refer to the mistakes that can occur when drawing conclusions about a population from sample data. A Type I error occurs when a true null hypothesis is rejected, while a Type II error occurs when a false null hypothesis is not rejected. These errors are important to consider when interpreting the results of hypothesis tests.

Written by Perlego with AI assistance

8 Key excerpts on "Errors in Hypothesis Testing"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), and each adds context and meaning to key research topics.
  • Business Statistics I Essentials

    ...The same is true in confidence interval estimation, or in any other statistical inference technique. The explanation for this is that in inferential procedures, we are always working with sample data. And sample data can never yield definitive results due to the presence of sampling error. Errors in conclusions drawn in hypothesis testing are referred to as either Type I or Type II. These are defined as: Type I error: the decision to reject the null hypothesis (H₀) when it is true and should not have been rejected. Type II error: the decision to fail to reject the null hypothesis (H₀) when it is false and should have been rejected. The probability of making a Type I error is denoted by α (alpha), the level of significance. As we have seen in previous examples, α is a pre-determined value which is specified in a textbook problem or, in a real-life situation, by the user(s) of the information. This is the chance that the user(s) of the information are willing to take that the null hypothesis might be rejected in error; i.e., that we might conclude that a difference or relationship exists when it really does not. The choice of this probability should consider the possible consequences of an incorrect decision. For example, in medical research where life-sustaining treatments are being tested, one would likely be much more conservative in specifying a value for α than in some of the examples presented herein. And even in critical business decisions, we would be very likely to set a conservative value for α. The probability of making a Type II error is denoted by β (beta). This is not a predetermined value set by the user(s) of the information, but a value that can only be estimated once the result of the test is known. This procedure is not included in this text and, for the most part, is not of real concern to decision makers...
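
To make the idea of α concrete, here is a minimal simulation sketch (added for illustration, not part of the excerpt): when H₀ is actually true, the long-run fraction of rejections at a given α should sit near α itself. The normal population, the sample size of 30, and α = 0.05 are assumed example values.

```python
import numpy as np
from scipy import stats

# Simulate experiments in which H0 is actually true (population mean really is 0)
# and count how often a one-sample t-test at alpha = 0.05 rejects it anyway.
# The long-run rejection fraction is the Type I error rate, and it sits near alpha.
rng = np.random.default_rng(42)
alpha, n_experiments, rejections = 0.05, 10_000, 0
for _ in range(n_experiments):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)  # H0 is true here
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        rejections += 1                               # a Type I error
print(f"Empirical Type I error rate: {rejections / n_experiments:.3f}")  # ~0.05
```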

  • Business Statistics Using EXCEL and SPSS

    ...Instead, you should say you do not reject H₀. This basically means: ‘hold off judging it, because I don’t know the chances I got this wrong’. Of course, this still means you reject H₁. So in actual practice, most people think the outcome is the same. Type II error is directly associated with the idea of statistical power, which would not be my choice of super-power if I had one. Statistical power is basically the ability your statistical tests have to correctly reject the null hypothesis (i.e. reject it if it is false in the population), and is defined as 1 − β. Power is also an under-appreciated concept in statistics, and one can often detect problems in this regard in published research, even in the best research journals. However, later in this chapter I will explain how to determine the chances of Type II error, and therefore how to determine statistical power. Back to Basics, Box 8.1: Errors of Inference in Hypothesis Testing. Type I and Type II errors are termed errors of inference because this name refers to the main task of the hypothesis test: to infer what is happening in the population from a sample. Remember always: the hypothesis refers to the population, but the test is done on a sample. So, there are two main ways you can get this wrong. Either you can accept H₀ when in the population it is not true, or you could accept H₁ when in the population it is not true. Table 8.1 sets out these different error situations. If you look back to the earlier examples of hypothesis testing, you can see that such errors can have more or less serious consequences, depending on the circumstances. But I will come to this in due course. How is it possible to get things wrong? Well, by completely ripping off the great Ronald Fisher, who basically invented this idea, I can explain things to you. Fisher seemed to have a liking for cups of tea, but I prefer beer, so let me go with that...
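
Since the excerpt defines power as 1 − β, a hedged sketch of a power calculation may help. One common route is the power classes in statsmodels; the effect size (Cohen's d = 0.5), the group size of 64, and the 80% power target below are assumed example values, not figures from the book.

```python
# Power (1 - beta) for an independent-samples t-test via statsmodels.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Probability of correctly rejecting H0 given the assumed effect size and n.
power = analysis.power(effect_size=0.5, nobs1=64, ratio=1.0, alpha=0.05)
print(f"power = {power:.3f}, beta = {1 - power:.3f}")   # ~0.80 / ~0.20

# Or solve for the per-group sample size needed to reach 80% power:
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.1f}")     # ~64
```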

  • The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation

    ...Type I Error (Brenda Hannon). In the context of statistical hypothesis testing, a Type I error occurs when the null hypothesis is rejected when, in fact, the null hypothesis should have been accepted. More specifically, a researcher observed a significant difference between two experimental conditions and consequently rejected the null hypothesis when, in truth, the observed significant difference between the two experimental conditions did not occur because of the manipulation; rather, it occurred because of random chance. Three everyday examples of Type I errors are when a medical test indicates that a patient has a disease when, in fact, the patient is actually disease-free; when a fire alarm indicates there is a fire when, in fact, there is no fire; and when a jury decides a person is guilty of a crime when, in fact, that person is innocent. Type I errors also occur in educational research; therefore, this entry considers various educational examples to further illustrate Type I errors. Understanding Type I Errors. In statistical analysis, hypothesis testing is used to determine whether the variations among different groups can be attributed to a manipulation or random chance. In educational contexts, hypothesis testing generally includes two types of hypotheses: a null hypothesis and an alternative hypothesis. A null hypothesis states that the phenomenon or manipulation under investigation produces no effect (i.e., makes no difference). An alternative hypothesis (also referred to as the research hypothesis) states the opposite of the null hypothesis; that is, an alternative hypothesis states that the phenomenon or manipulation under investigation does produce an effect (i.e., it does make a difference)...

  • The Essentials of Biostatistics for Physicians, Nurses, and Clinicians
    • Michael R. Chernick (Author)
    • 2011 (Publication Date)
    • Wiley (Publisher)

    ...These ideas will be discussed more thoroughly in the next section. 6.1 TYPE I AND TYPE II ERRORS The type I error or significance level (denoted as α) for a test is the probability that our test statistic is in the rejection region for the null hypothesis, but in fact the null hypothesis is true. The choice of a cutoff that defines the rejection region determines the type I error, and can be chosen for any sample size n ≥ 1. The type II error (denoted as β) depends on the cutoff value and the true difference δ ≠ 0, when the null hypothesis is false. It is the probability of not rejecting the null hypothesis when the null hypothesis is false, and the true difference is actually δ. The larger δ is, the lower the type II error becomes. The probability of correctly rejecting at a given δ is called the power of the test. The power of the test is 1 − β. We can define a power function f(δ) = 1 − β(δ). We use the notation β(δ) to indicate the dependency of β on δ. When δ = 0, f(δ) = α. We can relate to these two types of errors by considering a real problem. Suppose we are trying to show the effectiveness of a drug by showing that it works better than placebo. The type I and type II errors correspond to false claims. The type I error is the claim that the drug is effective when it is not (i.e., is not better than placebo by more than δ). The type II error is the claim that the drug is not effective when it really is effective (i.e., better than placebo by at least δ). The power, however, increases as |δ| increases (often in a symmetric fashion about 0, i.e., f(δ) = f(−δ)). Figure 6.1 shows the power function for a test that a normal population has a mean of zero versus the alternative that the mean is not zero, for sample sizes n = 25 and 100 and a significance level of 0.05. The solid curve is for n = 25, and the dashed for n = 100. We see that these power functions are both symmetric about 0, and meet with a value of 0.05 at δ = 0...
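
The power function f(δ) described here can be computed directly for the kind of test the excerpt describes (H₀: mean = 0, two-sided, α = 0.05). This is an illustrative sketch consistent with the Figure 6.1 description, with σ = 1 assumed; it is not the book's own code.

```python
import numpy as np
from scipy.stats import norm

# Power function f(delta) for the two-sided z-test of H0: mean = 0 (sigma = 1),
# for the sample sizes described in the excerpt's Figure 6.1 (n = 25 and n = 100).
def f(delta, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value
    shift = delta * np.sqrt(n)            # mean of the test statistic under H1
    return norm.cdf(-z_crit - shift) + 1 - norm.cdf(z_crit - shift)

for delta in (0.0, 0.2, 0.5):
    print(f"delta = {delta:+.1f}: f(25) = {f(delta, 25):.3f}, f(100) = {f(delta, 100):.3f}")
# At delta = 0 both curves equal alpha = 0.05; they rise symmetrically as |delta| grows,
# with the n = 100 curve above the n = 25 curve everywhere except delta = 0.
```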

  • Econometrics
    eBook - ePub
    • K. Nirmal Ravi Kumar (Author)
    • 2020 (Publication Date)
    • CRC Press (Publisher)

    ...We already know, the researcher makes the decision to reject or not reject (accept) the H₀ with a given probability of being correct. This probability of being correct is referred to as the level of significance or probability level of the test of significance, 0.05 (5%) or 0.01 (1%). If 0.05 is set, the researcher will have a 5 percent probability of making a Type I error, whereas if 0.01 is selected, the researcher will have only a 1 percent probability of committing a Type I error. In other words, working at 0.05, the researcher has a 95 percent chance of making a correct decision; working with 0.01, the researcher will have a 99 percent chance of making a correct decision. Let us discuss these two errors below. Type I error: When we reject the H₀ when it is supposed to be correct or true, it implies a Type I error. That is, the probability of rejecting the H₀ when it is true is called the Type I error. The probability of committing a Type I error is denoted by ‘α’. So, technically, ‘α’ is known as the Type I error. Hence, the Type I error is also called the Alpha (α) error or Error of the First Kind. Type II error: When we accept the H₀ even if it is incorrect or false, it implies a Type II error. That is, the probability of accepting the H₀ when it is false or not correct is called the Type II error. So, when a false hypothesis is erroneously accepted, it is known as a Type II error. The probability of a Type II error is denoted by β. So, the Type II error is also called the Beta (β) error or Error of the Second Kind. Example 7.7: The following example simplifies your understanding of this concept. Assume that a pesticide firm is introducing a new brand of pesticide into the market. Now the firm faces two alternatives: whether the ‘pesticide succeeds in the market’ or the ‘pesticide fails in the market’. Accordingly, the pesticide firm takes one of the following two decisions, viz., ‘introduce the new brand of pesticide into the market’ or ‘do not introduce the new brand of pesticide into the market’...
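
As a small numerical companion (not from the text), the following shows how moving from α = 0.05 to α = 0.01 changes the critical value of a two-sided z-test: the lower α buys a smaller Type I risk at the cost of a harder-to-reach rejection threshold.

```python
from scipy.stats import norm

# How the choice of significance level changes a two-sided z-test's critical value.
# alpha = 0.05 means a 5% Type I risk when H0 is true (a 95% chance of a correct
# decision); alpha = 0.01 cuts that risk to 1% but makes rejection harder.
for alpha in (0.05, 0.01):
    z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value
    print(f"alpha = {alpha}: reject H0 when |z| > {z_crit:.3f}; "
          f"P(correct decision | H0 true) = {1 - alpha:.2f}")
```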

  • A Step-By-Step Introduction to Statistics for Business

    ...The figure again depicts four different sample means, but this time drawn from H₁: A, B, C and D. If H₁ is true, what would happen to our conclusions from these samples? (Figure 7.07: Four theoretical sample means and their position on H₀ when H₀ is false.) Samples C and D are in the region of rejection. We would reject the null in both cases. H₀ is false, so these are correct decisions. We should have rejected the null, and we did reject the null. Samples A and B are not in the region of rejection. We would retain the null in both cases. H₀ is false, so these are incorrect decisions; we call these Type II errors. We should have rejected the null, but we mistakenly retained it instead. Thus, a Type II error occurs if we incorrectly retain the null when we should have rejected it, because the null was in fact false. The probability that we will commit a Type II error if the null hypothesis were false is represented with the Greek letter beta (β). We can only control beta indirectly; increasing the sample size reduces beta and increases its inverse, called power. Power is the probability that we will correctly reject the null when that null should have been rejected. If we set a more stringent alpha (for example, using α = .01 instead of α = .05), we shift the critical value to the right in a right-tailed test. This reduces the risk that we’ll commit a Type I error if H₀ is true, but it increases the risk that we’ll commit a Type II error if H₁ is true. The goal of good research design is to minimize both Type I errors and Type II errors simultaneously. You have direct control over alpha, so if you want to minimize Type I error, set α low. To minimize Type II error (β) (and maximize power), you’ll need a large sample size...
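
A brief sketch of the trade-off described above, using a right-tailed z-test with an assumed true effect of δ = 0.4 and σ = 1 (illustrative values only): tightening α raises β, while a larger n lowers β at any α.

```python
from scipy.stats import norm

# Beta for a right-tailed z-test under an assumed true effect (delta = 0.4, sigma = 1).
# A stricter alpha moves the critical value right and raises beta; a larger n
# shrinks the standard error and lowers beta (raising power) at any alpha.
def beta_right_tailed(alpha, n, delta=0.4, sigma=1.0):
    z_crit = norm.ppf(1 - alpha)                      # right-tail critical value
    return norm.cdf(z_crit - delta * n**0.5 / sigma)  # P(retain H0 | H1 true)

for n in (25, 100):
    for alpha in (0.05, 0.01):
        print(f"n = {n:3d}, alpha = {alpha}: beta = {beta_right_tailed(alpha, n):.3f}")
```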

  • Introducing Research and Data in Psychology: A Guide to Methods and Analysis
    eBook - ePub
    • Ann Searle (Author)
    • 2002 (Publication Date)
    • Routledge (Publisher)

    ...We conclude that our results were significant when really they were not. A Type I error can occur: • Because an experiment is badly designed and the result that has been obtained is actually due to the influence of confounding variables. • Because the levels of significance that have been set are too lenient. A Type II error occurs when we retain the null hypothesis (we retain the idea that the result is due to chance) when really we could accept the alternative hypothesis; we conclude that our results were due to chance when really they were not. There are two reasons for such errors: • Because an experiment is badly designed and a real effect is obscured by the influence of confounding variables. • Because the levels of significance that have been set are too stringent. With a Type I error you conclude that there is a difference or that there is a correlation when really there isn’t. This can occur because the level of significance set is too lenient. (Type I: ‘I’ looks like ‘1’ for lenient.) With a Type II error you conclude that there isn’t a difference or that there isn’t a correlation when really there is. This can occur because the level of significance set is too stringent. (Type II: ‘2’ looks like a backwards ‘S’ for stringent.) If you set the level of significance that must be reached before you will reject the null hypothesis at 0.10 (10 per cent), this is an easier level to reach than 0.05 (5 per cent); that is, it is more lenient. You are saying you will accept the alternative hypothesis when there is actually a 10 per cent chance that the result is a chance occurrence. There is thus a greater risk of making a Type I error. If you set the level of significance at 0.01 (1 per cent), this is a more difficult level to reach than 0.05 (5 per cent); that is, it is more stringent. You are saying you will accept the alternative hypothesis only if statistics show there is a mere 1 per cent probability that the result is a chance occurrence...

  • Statistical Misconceptions
    eBook - ePub
    • Schuyler W. Huck (Author)
    • 2015 (Publication Date)
    • Routledge (Publisher)

    ...Once there, open the folder for Chapter 8 and click on the link called “The Relationship Between Alpha and Beta Errors.” Then, follow the detailed instructions (prepared by this book’s author) on how to use the Java applet. By doing this assignment, you will demonstrate in a convincing fashion that the risks of Type I and Type II errors can be decreased at the same time. This assignment’s Java applet will also help you understand that Type II error risk is determined by several features of a study beyond the selected level of significance. * Ouyang, R. (n.d.). Basic concepts of quantitative research: Inferential statistics. Retrieved March 23, 2007, from http://ksumail.kennesaw.edu/~rouyang/ED-research/i-statis.htm. † Asraf, R. M., & Brewer, J. K. (2004). Conducting tests of hypotheses: The need for an adequate sample size. Australian Educational Researcher, 31(1), 79–94. * Appendix B contains references for all quoted material presented in this section. † A Type I error is made if a true null hypothesis is rejected. * This definition comes from the Merriam-Webster Online Dictionary. Retrieved March 25, 2007, from http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=nominal. † A computer simulation I conducted with μ₁ = μ₂, n₁ = 25, and n₂ = 5 revealed that Type I errors occurred 35.4% of the time over the 100,000 replications of the simulation...
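
The footnoted simulation can be sketched as follows. The footnote states only that μ₁ = μ₂, n₁ = 25, and n₂ = 5; it does not report the population variances, and a pooled-variance t-test holds its nominal α when variances are equal, so the 35.4% figure presumably reflects unequal variances. The σ values below are therefore assumptions, and the sketch illustrates the inflation rather than reproducing the exact number.

```python
import numpy as np
from scipy import stats

# Sketch of the footnoted simulation: mu1 = mu2 (H0 true), n1 = 25, n2 = 5,
# pooled-variance t-test at alpha = 0.05. The variances below are ASSUMED
# (the footnote does not report them); giving the smaller group the larger
# variance makes the pooled test reject far more often than the nominal 5%.
rng = np.random.default_rng(1)
alpha, reps, rejections = 0.05, 20_000, 0
for _ in range(reps):
    g1 = rng.normal(loc=0.0, scale=1.0, size=25)   # same mean: H0 is true
    g2 = rng.normal(loc=0.0, scale=4.0, size=5)    # larger variance, smaller n
    _, p = stats.ttest_ind(g1, g2, equal_var=True) # pooled-variance t-test
    if p < alpha:
        rejections += 1                            # a Type I error
print(f"Actual Type I error rate: {rejections / reps:.3f}")  # well above 0.05
```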