Mathematics

Probability Rules

Probability rules are a set of principles used to calculate the likelihood of events occurring in a given situation. These rules include the addition rule, multiplication rule, complement rule, and conditional probability. The addition rule deals with the probability of either of two events happening, while the multiplication rule calculates the probability of both events occurring. The complement rule finds the probability of an event not happening, and conditional probability assesses the likelihood of an event given that another event has occurred.
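The four rules summarized above can be sketched as small Python functions. This is a minimal illustration of ours (none of the excerpts below contain code); the die-roll events used as sample inputs are assumptions for demonstration only.

```python
# A minimal sketch of the four probability rules; illustrative only.

def addition_rule(p_a, p_b, p_a_and_b):
    """General addition rule: P(A or B) = P(A) + P(B) - P(A and B)."""
    return p_a + p_b - p_a_and_b

def multiplication_rule_independent(p_a, p_b):
    """Multiplication rule, valid only for independent events."""
    return p_a * p_b

def complement_rule(p_a):
    """Complement rule: P(not A) = 1 - P(A)."""
    return 1 - p_a

def conditional_probability(p_a_and_b, p_b):
    """Conditional probability: P(A | B) = P(A and B) / P(B), for P(B) > 0."""
    return p_a_and_b / p_b

# Illustrative values for one fair-die roll:
# A = "roll is even" = {2, 4, 6}, B = "roll is at most 3" = {1, 2, 3}.
p_a, p_b, p_a_and_b = 3/6, 3/6, 1/6   # A and B = {2}
print(addition_rule(p_a, p_b, p_a_and_b))       # 5/6 ≈ 0.833
print(complement_rule(p_a))                     # 0.5
print(conditional_probability(p_a_and_b, p_b))  # 1/3 ≈ 0.333
```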

Written by Perlego with AI-assistance

12 Key excerpts on "Probability Rules"

  • Introduction to Probability
    For instance, in modern mathematics we are taught at the high school level, or even earlier, that sets are “mathematical objects” which behave according to the rules of Boolean Algebra (the Algebra of Sets), and even if the reader is not familiar with this kind of algebra, he must surely have heard about such things as the commutative laws or the associative laws which characterize the behavior of “ordinary” numbers. [The commutative law for addition tells us, for example, that 2 + 3 = 3 + 2, and the associative law for multiplication tells us, similarly, that 2·(3·5) = (2·3)·5.] The same is also true in plane geometry, where we may not be able to define points or lines, yet specify such rules as “There is one and only one line through any two distinct points” or “There is only one line through a given point which is parallel to a given line.” We have made these general observations because they apply also to the study of probability. Probabilities are difficult to define, and as we saw in Chapter 2, most everyone seems to have his own intuitive ideas. This suggests that it may be best to leave the term “probability” undefined, subject only to the restriction that probabilities are “mathematical objects” which must obey certain rules. These rules, basically very simple ones, will be introduced in the next section, and they lead to the mathematical Theory of Probability, or the Calculus of Probability as it is sometimes called. In contrast to Chapter 2, this part of the study of probability is not controversial, for as we shall see, the rules we shall introduce are compatible with the classical probability concept, the frequency interpretation, as well as the subjective approach
  • Introductory Statistics
    Basic Concepts of Probability Alandra Kahl
    1 Department of Environmental Engineering, Penn State Greater Allegheny, PA 15132, USA

    Abstract

    A commonly used statistical measure is the measurement of probability. Probability is governed by both additive and multiplicative rules. These rules determine if events are independent of one another or dependent on previous or related outcomes. Conditional probability governs events that are not independent of one another and helps researchers to better make predictions about future datasets.
    Keywords: Conditional probability, Dependent, Independent, Probability.

    INTRODUCTION

    Probability is a commonly encountered statistical measure that helps to determine the likelihood of the outcome of a specific event. By understanding what the odds are of an event occurring, researchers can make further predictions about future datasets as well as to better understand collected data. The rules of probability govern the way that odds are generated as well as their interpretation. Conditional probability is the analysis of events that are not independent of one another and is frequently utilized to better understand collected datasets and make predictions about future outcomes.
    A probability is a number that expresses the possibility or likelihood of an event occurring. Probabilities may be stated as proportions ranging from 0 to 1 and percentages ranging from 0% to 100%. A probability of 0 means that an event cannot happen, while a probability of 1 means that an event is certain to happen. A probability of 0.45 (45%) means that the event has a 45 percent chance of happening [26].
    A study of obesity in children aged 5 to 10 seeking medical treatment at a specific pediatric clinic may be used to demonstrate the notion of likelihood. All children seen in the practice in the previous 12 months are included in the population (sample frame) described below [27
  • Understanding Uncertainty
    • Dennis V. Lindley(Author)
    • 2013(Publication Date)
    • Wiley
      (Publisher)
    With this terminology, the last form of the multiplication rule reads that your probability of the product of two independent events is the product of their separate probabilities, a result that is attractive because it is easy to remember. Unfortunately, it is true only if the events are independent; otherwise it is wrong, and often seriously wrong. Similarly, (5.2) reads that your probability of the sum of two events is the sum of their separate probabilities. Again, this is true only under restrictions, but this time the restriction is not independence but the requirement that the events be exclusive. Simple as these special forms are, their simplicity can easily lead to errors, and they are therefore best avoided unless the restrictions that make them valid are always remembered throughout the calculations. The desire for simplicity has often been emphasized, but here is an example where it is possible to go too far and think of the addition and multiplication rules in their simpler forms, forgetting the restrictions that must hold before they are correct. Notice that the restriction, necessary for the simple form of the addition rule, that the events be exclusive, or the disjunction impossible, is a logical restriction, having nothing to do with uncertainty, whereas independence, the restriction with the multiplication rule, is essentially probabilistic. It is perhaps pedantic to point out that the simple form of the addition rule is correct if you judge the disjunction to have probability zero, rather than knowing it is logically impossible, but we will see in §6.8 that it is dangerous to attach probability zero to anything other than a logical impossibility. 5.4 The Basic Rules There are now two rules that your probabilities have to obey: addition and multiplication
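Lindley's warning can be made concrete with a single fair-die roll. The events below are an illustrative example of ours, not from the excerpt; they are chosen to be neither exclusive nor independent, so both simple forms fail.

```python
from fractions import Fraction

# One fair-die roll; A = "even" = {2,4,6}, B = "at most 3" = {1,2,3}.
# A and B share the outcome 2 (not exclusive) and are not independent.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {1, 2, 3}

def p(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(S))

print(p(A | B))     # 5/6 -- the true P(A or B)
print(p(A) + p(B))  # 1   -- simple addition form, wrong here
print(p(A & B))     # 1/6 -- the true P(A and B)
print(p(A) * p(B))  # 1/4 -- simple multiplication form, wrong here
```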
  • Statistics
    eBook - ePub

    Statistics

    The Essentials for Research

    In the preceding paragraph we gave a definition of probability based on the a priori assumption of “equally likely” outcomes. We may have a problem. Suppose we don’t know whether or not the outcomes are equally likely. Suppose a coin is biased. How do we determine the probability of heads if heads and tails are not equally likely? In that situation we use an empirical definition of probability. This, too, is a ratio of events, but it is a ratio based on experience over a very large number of trials rather than on an a priori assumption of equally likely alternatives. For example, if we find that a coin comes up heads 70,000 times in 100,000 tosses, then the probability of obtaining a head on a given trial with this particular coin must be approximately .70. This probability differs from .50, so we would have to assume that this particular coin is biased, that when tossing this coin heads and tails are not equally likely outcomes.
    Actually, for our purposes, it is possible to use either definition of probability. The important thing to understand is that a probability is a ratio between zero and one which is assigned to an event according to the likelihood of that event’s occurrence.
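The empirical definition can be sketched as a simulation. This is an illustration of ours: the bias value and toss count are assumptions chosen to mirror the 70,000-in-100,000 example above.

```python
import random

# Estimate P(heads) for a biased coin from its relative frequency.
random.seed(1)

TRUE_P_HEADS = 0.7   # hypothetical bias, mirroring the excerpt's numbers
n_tosses = 100_000
heads = sum(random.random() < TRUE_P_HEADS for _ in range(n_tosses))

estimate = heads / n_tosses
print(round(estimate, 2))  # close to 0.70 for a large number of tosses
```

The estimate converges toward the true bias as the number of trials grows, which is exactly the "experience over a very large number of trials" the excerpt describes.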
    8.2 The Product Rule
    Sometimes we are required to determine the probability of a compound event composed of several constituent events. For example, we may wish to determine the probability of throwing two consecutive heads with an unbiased coin. The probability of the compound event is given by the product of the probabilities for the constituent events. When the constituent events are independent this multiplicative rule may be stated as follows: the probability that all of a group of independent events will occur is the product of the probabilities that the events will occur separately; that is, p(A and B) = p(A) × p(B).
    Independent events are, of course, uncorrelated events. When events are independent it means that the occurrence of one event is not in any way related to the occurrence of the other event. For example, tossing a head on the first throw of a coin is unrelated to the outcome of the second toss. The coin does not remember what it has done. If the coin is unbiased and we have thrown 10 consecutive heads, the probability of obtaining a head again on the eleventh toss is still .50. The coin feels no obligation to compensate for a string of heads by beginning a string of tails.
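The two-consecutive-heads calculation can be checked by simulation. This is an illustrative sketch of ours, not from the text.

```python
import random

# Product rule for independent events:
# P(two consecutive heads with a fair coin) = 0.5 * 0.5 = 0.25.
random.seed(7)

n_trials = 200_000
both_heads = sum(
    random.random() < 0.5 and random.random() < 0.5
    for _ in range(n_trials)
)

print(0.5 * 0.5)                       # exact product-rule answer: 0.25
print(round(both_heads / n_trials, 2)) # simulated estimate, close to 0.25
```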
  • Quantitative Techniques in Business, Management and Finance
    • Umeshkumar Dubey, D P Kothari, G K Awari(Authors)
    • 2016(Publication Date)
    If the total number of possibilities is n, then by definition the probability of either the first or the second event happening is (a₁ + a₂)/n = a₁/n + a₂/n, but a₁/n = P(A) and a₂/n = P(B). Hence, P(A or B) = P(A) + P(B). The theorem can be extended to three or more mutually exclusive events. Thus, P(A or B or C) = P(A) + P(B) + P(C). The addition rule is used for mutually exclusive events, to find the probability of occurrence of one or the other of two joint events.
    5.4.7 Multiplication Theorem
    Statement: If two events A and B are independent, the probability that they both will occur is equal to the product of their individual probabilities. Symbolically, if A and B are independent, then P(A and B) = P(A) × P(B). The theorem can be extended to three or more independent events; thus, P(A, B and C) = P(A) × P(B) × P(C).
    Proof: If an event A can happen in n₁ ways of which a₁ are successful, and the event B can happen in n₂ ways of which a₂ are successful, we can combine each successful event in the first case with each successful event in the second case. Thus, the total number of successful happenings in both cases is a₁ × a₂. Similarly, the total number of possible cases is n₁ × n₂. Then by definition, the probability of the occurrence of both events is (a₁ × a₂)/(n₁ × n₂) = (a₁/n₁) × (a₂/n₂), but a₁/n₁ = P(A) and a₂/n₂ = P(B). ∴ P(A and B) = P(A) × P(B). In a similar way, the theorem can be extended to three or more events. The multiplication rule helps to find the joint probability of particular outcomes or events.
    5.5 Conditional Probability
    When we are computing the probability of a particular event A given information about the occurrence of another event B, this probability is referred to as conditional probability
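The counting arguments in these proofs can be replayed with concrete numbers. The values below are hypothetical choices of ours; exact arithmetic uses Python's fractions module.

```python
from fractions import Fraction

# Multiplication theorem: A succeeds in a1 of n1 ways, B in a2 of n2 ways.
a1, n1 = 2, 5    # P(A) = 2/5
a2, n2 = 3, 10   # P(B) = 3/10

# Each success of A pairs with each success of B.
p_both = Fraction(a1 * a2, n1 * n2)   # successful pairs / possible pairs
assert p_both == Fraction(a1, n1) * Fraction(a2, n2)
print(p_both)  # 3/25

# Addition theorem: two mutually exclusive events among the same n outcomes.
b1, b2, n = 2, 3, 10
p_either = Fraction(b1 + b2, n)
assert p_either == Fraction(b1, n) + Fraction(b2, n)
print(p_either)  # 1/2
```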
  • Statistical Methods for Communication Science
    • Andrew F. Hayes(Author)
    • 2020(Publication Date)
    • Routledge
      (Publisher)
    As discussed above, probability can be a vexing topic even to the mathematician. The key to succeeding in the derivation of probabilities is staying organized. Some guidelines and rules of probability are also helpful. In this section, I introduce two laws of probability that will serve you well.

    5.2.1 The Additive Law of Probability

    The additive law of probability (or the additive rule of probability) states that the probability that either one of two events A and B will occur is equal to the sum of their individual probabilities minus the probability of them both occurring. Symbolically,
    P(A or B) = P(A) + P(B) − P(A and B)     (5.2)
    Figure 5.1 A classroom of thirty students.
    Consider again the classroom with 12 men and 18 women. Suppose 10 of the men are wearing pants, and 14 of the women are wearing pants (see Figure 5.1 for a visual aid). What is the probability of randomly selecting either a man or someone wearing pants? We can answer this problem using equation 5.1 if we are very careful. We know there are 30 people, one of whom we will randomly select, so the number of possible events is 30. Given that there are 12 men and 24 people wearing pants, your intuition might tell you that there are 12 + 24 = 36 qualifying events, so the probability of picking a man or a person wearing pants is (36/30) = 1.2. But that can’t be correct because a probability must be between zero and one. The error becomes apparent when you observe that this strategy double counts the ten men wearing pants as a qualifying event. So we have to subtract out the number of men wearing pants from the numerator of equation 5.1. There are ten men wearing pants, so the numerator becomes 12 + 24 − 10 = 26, and the correct probability is (26/30) = 0.867.
    Now let’s derive this answer using the additive law in equation 5.2. Let A be the probability of picking a man and B be the probability of picking someone wearing pants. So P(A) = (12/30) = 0.40, P(B) = (24/30) = 0.80. From the information above we know that 10 of the men are wearing pants, so P(A and B) = (10/30) = 0.333. Using equation 5.2, P(A or B
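The classroom calculation can be verified numerically; the code below is a direct transcription of the counts given in the excerpt.

```python
# Additive law applied to the classroom example: 12 men, 18 women,
# 24 people wearing pants, of whom 10 are men.
men, people = 12, 30
pants_wearers = 24
men_in_pants = 10

p_man = men / people                     # P(A) = 0.40
p_pants = pants_wearers / people         # P(B) = 0.80
p_man_and_pants = men_in_pants / people  # P(A and B) ≈ 0.333

p_man_or_pants = p_man + p_pants - p_man_and_pants
print(round(p_man_or_pants, 3))  # 0.867

# Equivalent direct count: 12 + 24 - 10 = 26 qualifying people out of 30.
print(round((men + pants_wearers - men_in_pants) / people, 3))  # 0.867
```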
  • Applied Univariate, Bivariate, and Multivariate Statistics
    • Daniel J. Denis(Author)
    • 2015(Publication Date)
    • Wiley
      (Publisher)
    2 or otherwise not work and lose their utility, then there is no guarantee that the derived rules will still hold. However, until that day should come, we operate under the assumption that they are true, and proceed to build additional Probability Rules on them as a foundation.

    2.5.4 Conditional Probability

    Conditional probability is a very important topic to both the disciplines of mathematics and statistics proper as well as to applied scientific domains. Conditional probabilities are just what they sound like: probabilities that are conditional or contingent upon some other event.
    For example, suppose the unconditional probability of getting cancer is equal to 0.10. Now, if I selected an individual at random from the population, and learned that individual has been smoking two packages of cigarettes per day for the past 30 years, we would probably both agree that the probability of cancer for this individual is not equal to the unconditional probability of 0.10. That is, what we have just agreed on is that p(C/A) > p(C), where p(C) is the probability of cancer and p(C/A) is the probability of cancer given addiction to cigarettes. If, on the other hand, the person we randomly selected had mini-wheats as his favorite cereal without any mention of smoking cigarettes, we would probably agree that p(C/M) = p(C), where again p(C) is the probability of cancer, but now p(C/M) is the probability of cancer given mini-wheat eating. In this case, we would expect the unconditional probability p(C) to equal the conditional probability p(C/M).
    Conditional probabilities allow us to narrow the sample space so that we may “zero in” on a more well‐defined set of elements and assess their probability. More formally, we can state the conditional probability of an event B given that event A
  • Engineering Mathematics
    eBook - ePub

    Engineering Mathematics

A Programmed Approach, 3rd Edition

    • C W. Evans(Author)
    • 2019(Publication Date)
    • Routledge
      (Publisher)
    For example, not every sample space is bounded.
    24.2    THE RULES OF PROBABILITY
    We can use Venn diagrams, together with the interpretation we have put on probability, to deduce the basic rules of probability. In the sections that follow we shall suppose that E and F are any two events in the sample space S (see Fig. 24.2). To begin with we must employ the basic terminology of set theory:
    1 Union: E ∪ F = {x | x ∈ E or x ∈ F, or both}
    2 Intersection: E ∩ F = {x | x ∈ E and x ∈ F}
    3 Complement: E′ = {x | x ∈ S but x ∉ E}
    There are rules of set theory which can be deduced formally from these definitions. However, for our purposes they can be inferred easily from Venn diagrams. Here are the rules:
    Fig. 24.2 Two events E and F.
    E ∪ F = F ∪ E
    E ∩ F = F ∩ E
    (E ∪ F)′ = E′ ∩ F′
    (E ∩ F)′ = E′ ∪ F′
    E ∩ (F ∪ G) = (E ∩ F) ∪ (E ∩ G)
    E ∪ (F ∩ G) = (E ∪ F) ∩ (E ∪ G)
    You might like to draw a few Venn diagrams to convince yourself of the truth of these. Now for the first rule of probability.
    24.3    THE SUM RULE
    The probability that either the event E or the event F (or both) will occur is the sum of the probability that E will occur with the probability that F will occur, less the probability that both E and F will occur:
    P(E ∪ F) = P(E) + P(F) − P(E ∩ F)
    To see this we merely need to note that the area enclosed by both E and F is the area enclosed by E, together with the area enclosed by F, less the area of the overlap E ∩ F, which we would otherwise have counted twice. □  The probability that a drilling machine will break down is 0.35. The probability that the lights will fail is 0.28. It is known that the probability that one or the other (or possibly both) will occur is 0.42. Obtain the probability that both the machine will break down and the lights will fail. Let E be ‘the drilling machine will break down’ and F be ‘the lights will fail’. Then P(E) = 0.35, P(F) = 0.28 and P(E ∪ F) = 0.42
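The drilling-machine example rearranges the sum rule to solve for the joint probability. Using the numbers from the excerpt:

```python
# Sum rule rearranged: P(E ∩ F) = P(E) + P(F) - P(E ∪ F).
p_e = 0.35        # drilling machine breaks down
p_f = 0.28        # lights fail
p_e_or_f = 0.42   # one or the other (or both)

p_e_and_f = p_e + p_f - p_e_or_f
print(round(p_e_and_f, 2))  # 0.21
```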
  • Statistics in Engineering
    eBook - ePub

    Statistics in Engineering

    With Examples in MATLAB® and R, Second Edition

    • Andrew Metcalfe, David Green, Tony Greenfield, Mayhayaudin Mansor, Andrew Smith, Jonathan Tuke(Authors)
    • 2019(Publication Date)
    2 Probability and making decisions Three approaches to defining probability are introduced. We explain the fundamental rules of probability and use these to solve a variety of problems. Expected monetary value is defined and applied in conjunction with decision trees. Permutations and combinations are defined and we make a link with the equally likely definitions of probability. We discuss the concept of random digits and their use for drawing simple random samples from a population. See relevant examples in Appendix E: Appendix E.1 How good is your probability assessment? Appendix E.2 Buffon’s needle 2.1    Introduction The Australian Bureau of Meteorology Adelaide Forecast gives the chance of any rain tomorrow, a summer day in South Australia, as 5%. We will see that it is natural to express the chance of an uncertain event, such as rain tomorrow, occurring as a probability on a scale from 0 to 1. If an event is as likely to occur as it is not to occur, then it has a probability of occurrence of 0.5. An impossible event has a probability of 0 and a certain event has a probability of 1. Formally, the Bureau’s chance of 5% is a probability of 0.05, and as this is considerably closer to 0 than 1 we think it is unlikely to rain tomorrow. There are several ways of giving a more precise interpretation. One is to imagine that similar weather patterns to today’s have been observed in Adelaide on many occasions during the Australian summer, and that on 5% of these occasions it has rained on the next day. Another interpretation is based on the notion of a fair bet (formally defined in Section 2.3.3). The weather forecaster thinks that the possibility of paying out $95 if it rains is offset by the more likely outcome of receiving $5 if it is dry. Many engineering decisions are based on such expert assessments of probability. For example, after drilling a well an oil company must decide either to prepare it for oil production or to plug and abandon it
  • A Farewell to Entropy
    eBook - ePub

    A Farewell to Entropy

    Statistical Thermodynamics Based on Information

    • Arieh Ben-Naim(Author)
    • 2008(Publication Date)
    • WSPC
      (Publisher)
    define the term probability. As it turns out, each definition has its limitations. But more importantly, each definition uses the concept of probability in the very definition, i.e., all definitions are circular. Nowadays, the mathematical theory of probability is founded on an axiomatic basis, much as Euclidean geometry or any other branch of mathematics.
    The concept of probability is considered to be a primitive concept that cannot be defined in terms of more primitive concepts, much like a point or a line in geometry are considered to be primitive concepts. Thus, even without having a proper definition of the term, probability is a fascinating and extremely useful concept.
    Probability theory was developed mainly in the 17th century by Fermat (1601–1665), Pascal (1623–1662), Huyghens (1629–1695) and by J. Bernoulli (1654–1705). The main motivation for developing the mathematical theory of probability was to answer various questions regarding games of chance. A classical example is the problem of how to divide the stakes when the game of dice must be stopped (we shall discuss this problem in Section 2.6). In this chapter, we shall start with the axiomatic approach to probability theory. We shall discuss a few methods of calculating probabilities and a few theorems that are useful in statistical mechanics.
    2.2 The Axiomatic Approach
    The axiomatic approach to probability was developed mainly by Kolmogorov in the 1930s. It consists of the three elements denoted as {Ω, F, P}, which together define the probability space. The three elements of the probability space are the following.
    2.2.1 The sample space, denoted Ω. This is the set of all possible outcomes of a specific experiment (sometimes referred to as a trial).
    Examples: The sample space of all possible outcomes of tossing a coin consists of two elements, Ω = {H, T}, where H stands for head and T stands for tail. The sample space of throwing a die consists of the six possible outcomes, Ω = {1, 2, 3, 4, 5, 6}. These are called simple events or “points” in the sample space. In most cases, simple events are equally probable, in which case they are referred to as elementary events
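These finite, equally probable sample spaces lend themselves to a short sketch. The `prob` helper below is a hypothetical convenience of ours, not notation from the book.

```python
from fractions import Fraction

# Finite probability space with equally probable elementary events.
def prob(event, sample_space):
    """P(event) = |event| / |sample_space| under equally likely outcomes."""
    return Fraction(len(event & sample_space), len(sample_space))

coin = {"H", "T"}
die = {1, 2, 3, 4, 5, 6}

print(prob({"H"}, coin))              # 1/2
print(prob({2, 4, 6}, die))           # 1/2
print(prob({1, 2, 3, 4, 5, 6}, die))  # 1 (the certain event)
```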
  • Elementary Probability with Applications
    2 Conditional Probability and the Multiplication Rule
    2.1  Conditional Probability
    Sometimes we are concerned with probabilities about some portion of the sample space rather than the entire sample space. Here are two examples. The probability that a person has an annual income over $100,000 would be different than the probability that a college graduate has an annual income over $100,000. As of June 27, 2000, John Vander Wal had a season batting average of .292, but his batting average against left-handed pitchers was .188. In both of these examples, we are reducing the sample space. For the first example, the reduced sample space consists of college graduates and in the second example the reduced sample space consists of left-handed pitching opposition. The following example illustrates the concept of conditional probability.
    Example 2.1. One chip is selected at random from a box containing five chips numbered 1, 2, 3, 4, 5. So S = {1, 2, 3, 4, 5} is an equally probable sample space. What is the probability of selecting a 1? If we let B = {1}, then P(B) = 1/5. Now suppose we are told that the outcome is an odd number. Let A = {1, 3, 5}. We are given that the outcome is in A. Now what is the probability of getting a 1? The answer is 1/3. We write this as P(B|A) = 1/3. The vertical line separating events B and A means “given” and P(B|A) is called the conditional probability of B given A. We reduce the sample space from S to A while maintaining the same relative probabilities between outcomes. Since each outcome in S has an equal probability, the same must be true in the reduced sample space A. That is, we increase the probability of each outcome in A proportionately. So each outcome 1, 3, 5 has probability 1/5 relative to S but is increased to 1/3 relative to A. Had one outcome in S
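Example 2.1 can be reproduced by counting within the reduced sample space; the code below is a direct transcription of the excerpt's numbers.

```python
from fractions import Fraction

# Conditioning reduces the sample space S to A.
S = {1, 2, 3, 4, 5}
A = {1, 3, 5}   # we are told the outcome is odd
B = {1}

p_B = Fraction(len(B), len(S))              # P(B) = 1/5
p_B_given_A = Fraction(len(B & A), len(A))  # P(B|A) = 1/3

print(p_B)          # 1/5
print(p_B_given_A)  # 1/3
```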
  • Introduction to Probability and Statistics
    • Giri(Author)
    • 2019(Publication Date)
    • CRC Press
      (Publisher)
    0 to the “impossible” event.
    3.  If Bₙ ⊂ S, n = 1, 2, 3, …, k are disjoint, then μ(B₁ ∪ B₂ ∪ … ∪ Bₖ) = μ(B₁) + μ(B₂) + … + μ(Bₖ) (cf. the additive law of probability).
    Any definition of probability that retains the above accepted rules of computation of probability measures is capable of retaining the algebra of probability developed in earlier sections. The axiomatic definition of probability is one such generalization of the classical definition. In the axiomatic definition of probability the sample space S is treated as an abstract set, and events in the sample space are the “measurable” subsets of S. For an intelligible exposition of the definition we need to know some fundamental results of the algebra of sets and the theory of measures. The following two sections are devoted to these topics.
    2.2.1    The Algebra of Sets
    Definition 2.2.1.1: Set   A set, in an abstract sense, is any well-defined collection of objects, such as numbers, people, dogs, or electric bulbs. An object belonging to a particular set is called an element of the set.
    Sets are commonly represented by capital letters (A, B, C, …), and elements of sets by lowercase letters (a, b, c, …). If an element a belongs to a set A, it is represented as a ∈ A. If, on the other hand, an element does not belong to a set A, it is represented as a ∉ A. To specify that certain objects belong to a set, we use braces { }. Thus the set A = {1, 2, 3} consists of the numbers 1, 2, 3.
    Sets can be described in two ways: (1) by enumerating or listing all elements in a set (the tabular form of the set), and (2) by describing or defining the elements of a set by a property or a set of properties that are satisfied by the elements of the set but by no element outside the set (the defining form of the set). An example of a tabular form of a set is A = {1, 2, 3}. The corresponding defining form of the set A is A = {a : a is a positive integer less than or equal to 3}. All sets need not have both tabular and defining forms. For example, the set of real numbers between 0 and 1 does not have a tabular form. Its defining form is A = {a : a