Mathematics

Probability of Combined Events

The probability of combined events in mathematics refers to the likelihood of two or more events occurring together. It is calculated using the principles of probability, such as the multiplication rule for independent events or the addition rule for mutually exclusive events. Understanding the probability of combined events is essential for making informed decisions in various real-world scenarios.

Written by Perlego with AI-assistance

11 Key excerpts on "Probability of Combined Events"

  • Beginning Statistics with Data Analysis
    • Frederick Mosteller, Stephen E. Fienberg, Robert E.K. Rourke (Authors)
    • 2013(Publication Date)
    Lunch seats. Two people sit down to lunch at a square lunch table with room for just one on a side. Use formula (3) to compute the probability that they will sit together at a corner instead of across from one another if they sit down at random.
    8. Matching events and dates. In a multiple-choice question, a student is asked to match three dates with three events. Use formula (3) to compute the probability that sheer guessing will produce three correct answers. (Note: Once two are correctly matched, the third is automatically correct.)
    9. Blood types. Among 10 people, 6 have blood type U, and 4 have blood type V. Use formula (3) twice to find the probability that two people drawn randomly without replacement from this group include both blood types.
    10. Advertisements and products. The probability that a magazine subscriber will read an advertisement is 0.03, and the probability that anyone who reads an advertisement will try the product is 0.006. Find the fraction of subscribers who will buy the product.
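Exercise 9 above can be checked by brute-force enumeration. The sketch below is ours, not the book's, and it counts unordered pairs rather than applying formula (3) directly:

```python
from itertools import combinations

# 10 people: 6 with blood type U, 4 with blood type V
people = ["U"] * 6 + ["V"] * 4

# Enumerate all unordered pairs drawn without replacement
pairs = list(combinations(people, 2))
both_types = [p for p in pairs if set(p) == {"U", "V"}]

prob = len(both_types) / len(pairs)
print(prob)  # 24/45 = 8/15, matching (6/10)(4/9) + (4/10)(6/9)
```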

    8-5 PROBABILITIES FOR COMBINED EVENTS

    We think of the outcomes listed in our sample space as elementary events from which other events are constructed. In selecting one card from the deck of cards, we had 52 elementary events or outcomes. The event “ace” is composed by putting together the four elementary events A♠, A♥, A♦ and A♣. The probability of the combined event is the sum of the probabilities of the elementary events, because they do not overlap:
    P(ace) = 1/52 + 1/52 + 1/52 + 1/52 = 4/52 = 1/13,
    as we found earlier.
    The probability of an event consists of the sum of the probabilities of its elementary events.
    When the events are equally likely, we merely apply the formula for the ratio of the counts.
             The Combined Event “A and B”. If we have two events A and B, they may or may not have elementary events in common. Figure 8-1(a) shows that the events A and B have elementary events in common. If both event A and event B occur, then one of the elementary events they have in common must occur. We call this combined event “A and B”. Let us represent an elementary event by a point in the sample space. Figure 8-1(a) shows 10 points in the sample space; 5 points belong to A and 5 points to B. Two points are common to both A and B. Two belong to neither A nor B.
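The point-counting picture of Figure 8-1(a) can be reproduced with Python sets. The particular points chosen below are our own; only their counts (5 in A, 5 in B, 2 shared, 2 in neither) match the figure:

```python
# Sample space of 10 elementary events, modelled as points 0..9
sample_space = set(range(10))
A = {0, 1, 2, 3, 4}
B = {3, 4, 5, 6, 7}        # shares points 3 and 4 with A

p = lambda event: len(event) / len(sample_space)

# "A and B" is the intersection; "A or B" adds the non-overlapping parts
print(p(A & B))                 # 0.2
print(p(A) + p(B) - p(A & B))   # 0.8, the probability of "A or B"
```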
  • Introductory Statistics
    Basic Concepts of Probability
    Alandra Kahl
    Department of Environmental Engineering, Penn State Greater Allegheny, PA 15132, USA

    Abstract

    A commonly used statistical measure is the measurement of probability. Probability is governed by both additive and multiplicative rules. These rules determine if events are independent of one another or dependent on previous or related outcomes. Conditional probability governs events that are not independent of one another and helps researchers to better make predictions about future datasets.
    Keywords: Conditional probability, Dependent, Independent, Probability.

    INTRODUCTION

    Probability is a commonly encountered statistical measure that helps to determine the likelihood of the outcome of a specific event. By understanding what the odds are of an event occurring, researchers can make further predictions about future datasets as well as to better understand collected data. The rules of probability govern the way that odds are generated as well as their interpretation. Conditional probability is the analysis of events that are not independent of one another and is frequently utilized to better understand collected datasets and make predictions about future outcomes.
    A probability is a number that expresses the possibility or likelihood of an event occurring. Probabilities may be stated as proportions ranging from 0 to 1 or as percentages ranging from 0% to 100%. A probability of 0 means that an event cannot happen, while a probability of 1 means that an event is certain to happen. A probability of 0.45 (45%) means that the event has a 45 percent chance of happening [26 ].
    A study of obesity in children aged 5 to 10 seeking medical treatment at a specific pediatric clinic may be used to demonstrate the notion of likelihood. All children seen in the practice in the previous 12 months are included in the population (sample frame) described below [27
  • Entropy Demystified: The Second Law Reduced To Plain Common Sense (Revised Edition)
    • Arieh Ben-Naim(Author)
    • 2008(Publication Date)
    • WSPC
      (Publisher)
    assign a number, referred to as the probability of that event, which has the following properties:
      (a) The probability of each event is a number between zero and one.
    (b) The probability of the certain event (i.e., that any of the outcomes has occurred) is one.
    (c) The probability of the impossible event is zero.
    (d) If two events are disjoint or mutually exclusive, then the probability of the sum (or union) of the two events is simply the sum of the probabilities of the two events.  
    Condition (a) simply gives the scale of the probability function. In daily life, we might use the range 0–100% to describe the chances of, for example, raining tomorrow. In the theory of probability, the range (0,1) is used. The second condition simply states that if we do perform an experiment, one of the outcomes must occur. This event is called the certain event and is assigned the number one. Similarly, we assign the number zero to the impossible event. The last condition is intuitively self-evident. Mutual exclusivity means that the occurrence of one event excludes the possibility of the occurrence of the second. In mathematical terms, we say that the intersection of the two events is empty (i.e., contains no elementary event).
    For example, the two events:
    Clearly, the events A and B are disjoint; the occurrence of one excludes the occurrence of the other. If we define the event:
    Clearly, A and C, or B and C are not disjoint. A and C contain the elementary event 6. B and C contain the elementary event 5.
    The events, “greater than or equal to 4,” and “smaller than or equal to 2,” are clearly disjoint. In anticipating the discussion below, we can calculate the probability of the first event {4, 5, 6} to be 3/6 = 1/2, and the probability of the second event {1, 2} to be 2/6 = 1/3; hence, the combined (or the union) event {1, 2, 4, 5, 6} has the probability 5/6, which is the sum of 1/2 and 1/3.
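The die example above can be verified exactly with Python's `fractions` module (a sketch of ours, not from the book):

```python
from fractions import Fraction

# Events on a single fair die
A = {4, 5, 6}   # "greater than or equal to 4"
B = {1, 2}      # "smaller than or equal to 2"

p = lambda e: Fraction(len(e), 6)

assert A & B == set()          # disjoint: no shared elementary event
print(p(A), p(B), p(A | B))    # 1/2 1/3 5/6, the sum rule for disjoint events
```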
  • Simplified Business Statistics Using SPSS
    Let A and B be two events. The conditional probability of event A given that event B has occurred, denoted by P(A | B), is defined as

    P(A | B) = P(A ∩ B) / P(B)

    To apply conditional probability, we will use a joint probability table, i.e. one table combining two events.

    5.5.3 Independent Events

    An event B is said to be independent (or statistically independent) of event A if the conditional probability of B given A is equal to the unconditional probability of B:

    P(B | A) = P(B)

    Two events A and B are independent if

    P(A ∩ B) = P(A) × P(B)

    For independent and conditional events we use a joint table: one table that combines two events. For example, employment status and gender have been combined into one table.
    Gender    Employed    Unemployed    Total
    Male      450         473           923
    Female    293         1035          1328
    Total     743         1508          2251
    To create probabilities, we divide each observation by the overall total, i.e. we convert the given counts to values between 0 and 1. Dividing each number by the overall total of 2251 gives the joint probability table.
    Gender    Employed            Unemployed            Total
    Male      450/2251 = 0.20     473/2251 = 0.21       923/2251 = 0.41
    Female    293/2251 = 0.13     1035/2251 = 0.46      1328/2251 = 0.59
    Total     743/2251 = 0.33     1508/2251 = 0.67      2251/2251 = 1
    The probabilities in the total row and column represent the probabilities of the respective events. These are the marginal probabilities:

    P(Male) = 0.41, P(Female) = 0.59, P(Employed) = 0.33, P(Unemployed) = 0.67
    The probabilities of intersections will represent the probability of two intersecting events.
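The joint-table arithmetic above can be sketched in Python. The dictionary layout and variable names below are ours; the counts come from the table:

```python
# Counts from the gender x employment table above
counts = {
    ("Male", "Employed"): 450, ("Male", "Unemployed"): 473,
    ("Female", "Employed"): 293, ("Female", "Unemployed"): 1035,
}
total = sum(counts.values())   # 2251

joint = {k: v / total for k, v in counts.items()}

# Marginal probabilities: sum joint probabilities over the other variable
p_male = sum(v for (g, _), v in joint.items() if g == "Male")
p_employed = sum(v for (_, s), v in joint.items() if s == "Employed")
print(round(p_male, 2), round(p_employed, 2))   # 0.41 0.33

# Conditional probability P(Employed | Male) = P(Male and Employed) / P(Male)
print(round(joint[("Male", "Employed")] / p_male, 2))   # 0.49
```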
  • Equilibrium Statistical Mechanics
    disjoint compound events and simple events, for we add up probabilities in the same way in both cases. It is only when compound events have an intersect that we must use some care. Then we found that (8) is the appropriate way to add probabilities. From now on we can refer to both compound and simple events as “events” and always use Equation (8). If the events happen to be disjoint, then it automatically reduces to (7).
    We have introduced the concepts above so that we can define two final, very important notations that are used repeatedly in statistical mechanics. In fact, the whole purpose of this section is to arrive at the notion of independent events. To make the concept precise we must first consider the idea of conditional probability. Conditional probability measures the effect (if any) on the occurrence of an event A when it is known that an event B has occurred in the experiment. For example, we might ask: What is the probability that the card drawn is a heart if we know that it is a number 3? A minute’s thought shows that its being a three has no effect on the probability that it is a heart. In other words, this probability should still be 1/4. On the other hand, we might ask: What is the probability that the card is a heart if we know it is a one-eyed jack? Now the fact that it is a one-eyed jack limits the possible suits to either hearts or spades; consequently, the above probability would be 1/2 (using our usual assumption). Clearly we now have an “effect” on the probability of drawing a heart when we know the event C occurred. We now define the conditional probability P (X | Y ) by

    P(X | Y) = P(XY) / P(Y)   (9)

    By P (X | Y ) we mean the probability that event X occurs given the fact that event Y occurred in the experiment. Clearly if events X and Y have no points in common (disjoint), so that P (XY ) = 0, event X cannot occur if event Y
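The two card questions above can be answered by direct counting over a 52-card deck. This sketch is ours; it assumes the usual convention that the one-eyed jacks are the jack of hearts and the jack of spades:

```python
from fractions import Fraction

suits = ["hearts", "diamonds", "clubs", "spades"]
ranks = list(range(2, 11)) + ["J", "Q", "K", "A"]
deck = [(r, s) for s in suits for r in ranks]

def p_cond(X, Y):
    """P(X | Y) = P(X and Y) / P(Y), by counting cards in a fair deck."""
    return Fraction(len(X & Y), len(Y))

hearts = {c for c in deck if c[1] == "hearts"}
threes = {c for c in deck if c[0] == 3}
one_eyed = {("J", "hearts"), ("J", "spades")}   # conventional one-eyed jacks

print(p_cond(hearts, threes))    # 1/4: knowing it's a three has no effect
print(p_cond(hearts, one_eyed))  # 1/2: suit restricted to hearts or spades
```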
  • Probability Theory and Mathematical Statistics for Engineers
    • V. S. Pugachev(Author)
    • 2014(Publication Date)
    • Pergamon
      (Publisher)
    A1 ∪ A2 ∪ … ∪ An = Ω.

    It follows from the axiom of probability addition that if the events A1, …, An are exclusive and form a complete set then the sum of their probabilities is unity:

    P(A1) + P(A2) + … + P(An) = 1.   (1.14)
    Complementary events are exclusive and form a complete set. Therefore it follows from (1.14) that the sum of the probabilities of complementary events is unity:

    P(A) + P(Ā) = 1.   (1.15)
    This formula is very important for practice. In many problems it is difficult to calculate the probability of an event while the probability of the complementary event may be easily calculated. In such cases formula (1.15) is useful.
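A classic case where the complement is far easier to compute is "at least one six in n rolls of a fair die"; the complementary event, "no six at all", has the simple probability (5/6)^n. This example is ours, illustrating formula (1.15):

```python
# Complement rule P(A) = 1 - P(not-A): the probability of at least one six
# in n rolls is awkward to sum directly, but "no six at all" is (5/6)**n.
def p_at_least_one_six(n):
    return 1 - (5 / 6) ** n

print(round(p_at_least_one_six(4), 4))   # about 0.5177
```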

    1.6 Conditional probabilities

    1.6.1 Conditional probability

    The property of frequency multiplication gives a natural way for defining a conditional probability.
    The ratio of the probability of the intersection of events A and B to the probability of the event B , if P (B ) ≠ 0, is called the conditional probability of the event A relative to the event B .
    P(A | B) = P(AB) / P(B).   (1.16)
    For such a definition of conditional probability the theorem of frequency multiplication can evidently be extended to the probabilities:
    P(AB) = P(A) P(B | A) = P(B) P(A | B).   (1.17)
    Thus the probability of joint appearance of two events is equal to the probability of one of them multiplied by the conditional probability of the other .
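Formula (1.17) can be illustrated with drawing two aces from a 52-card deck without replacement. The example is our own, not the book's:

```python
from fractions import Fraction

# P(AB) = P(A) P(B | A): probability that both cards drawn are aces
p_first_ace = Fraction(4, 52)
p_second_ace_given_first = Fraction(3, 51)   # 3 aces left among 51 cards

p_both = p_first_ace * p_second_ace_given_first
print(p_both)   # 1/221
```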
    It follows from the definition (1.16) that the conditional probabilities of various events relative to the same event B, P (B
  • Introduction to Probability and Statistics
    • Giri(Author)
    • 2019(Publication Date)
    • CRC Press
      (Publisher)
    Substituting these in the expression above, the theorem follows.  Q.E.D.
    Theorem 2.1.3.4 is called the theorem on multiplication of probabilities.
    Our aim was to develop a calculus of probabilities. The theorem on addition of probabilities (Theorem 2.1.3.1) and the theorem on multiplication of probabilities (Theorem 2.1.3.4) are the two important components of this calculus.
    We have seen that the theorem of addition of probabilities holds only for mutually exclusive events, and that of multiplication of probabilities holds only for independent events. Mutual exclusiveness and independence of events are thus two essential concepts we need for developing the calculus of probability. Of these, the physical concept of mutual exclusiveness of events is quite clear and follows immediately from definition. Two events A and B are mutually exclusive if the happening of one of them precludes the happening of the other. Physically, this means that the subsets SA and SB of the sample space, defining the events A and B, respectively, do not have any point in common. Such a simple set-theoretical exposition does not seem to be possible for independent events, and this obscures a lucid physical interpretation of independence of events. All that can be said in the way of offering a physical interpretation of independence of events is that a set of events are independent if the probability of any one of them is not affected by supplementary knowledge concerning the materialization of any of the remaining events. But this amounts to giving a physical description of the fact that the conditional probability of any of the events given one or more of the remaining events is the same as its unconditional probability.
    Pairwise Independence and Independence
    We have seen that if two events A and B
  • Mathematical Models of Information and Stochastic Systems
    • Philipp Kornreich(Author)
    • 2018(Publication Date)
    • CRC Press
      (Publisher)
    3  Joint, Conditional, and Total Probabilities
    3.1 CONDITIONAL PROBABILITIES
    Previously, probabilities of individual sets of events were discussed. Here, probabilities of several sets of events will be discussed. Consider two sets of discrete events, set SA = {A1, A2, …, An, …, AN} and set SB = {B1, B2, …, Bm, …, BM}. For example, spinning a roulette wheel (Figure 3.1 ) and rolling a die can be considered as two sets of events. The set of events SA are the 36 numbers on which the ball of the roulette wheel can land, and the events SB are the six events corresponding to the different number of dots on the top face. The events in each set might be mutually exclusive; that is, if event A28 occurs, none of the other events An can occur, and likewise for the events in set B. However, any event in set A can occur together with any event in set B. This was discussed in Chapter 2 . The probability that event An from set A and event Bm from set B will occur is known as the joint probability P(An ∩ Bm ). Here, the symbol ∩ stands for the “intersection” of An and Bm , as was shown in the Venn diagram of Figure 2.3 of Chapter 2 .
    For two statistically independent events An and Bm , the joint probability is equal to the product of the probabilities of the individual events. For example, the events of the ball landing on number 22 on the roulette wheel and a die landing so that the surface with 5 dots is on top are statistically independent events. The ball landing on any number does not depend on how the die will land and the die showing any number of dots does not depend on which number the roulette ball will land. As mentioned before, by two independent events is meant that one event can occur whether the second event occurred or not.
    P(An ∩ Bm) = Pn Qm   (3.1)
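Equation (3.1) can be checked by enumerating every roulette-number and die-face pair. This sketch is ours and, following the text, uses 36 roulette numbers:

```python
from fractions import Fraction
from itertools import product

roulette = range(1, 37)   # the 36 roulette numbers from the text
die = range(1, 7)

# All 216 equally likely (number, face) pairs
outcomes = list(product(roulette, die))
joint = Fraction(sum(1 for r, d in outcomes if r == 22 and d == 5),
                 len(outcomes))

# The joint probability equals the product of the individual probabilities
print(joint, Fraction(1, 36) * Fraction(1, 6))   # 1/216 1/216
```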
    In general, events An and Bm are not necessarily mutually exclusive. From the Venn diagram of Figure 2.4 of Chapter 2
  • Risk Assessment and Decision Analysis with Bayesian Networks
    Once we know each of these individual probability values, then we have seen that we can calculate the probability of any event by simply summing up the relevant probabilities. In principle, this gives us a complete and quite simple solution to all our risk prediction problems. But, of course there is a bit of a snag. The total number of combinations of individual probability values that we need to know is
    (Number of states in A 1 ) ×
    (Number of states in A 2 ) ×
    (Number of states in A 3 ) × etc.
    In the disease diagnosis problem of Example 5.10 the number of these state combinations is just 4. But, suppose that in the previous example A has 5 states, B has 2 states, C has 5 states, D has 1000 states, and E has 20 states. Then the number of state combinations is 1 million.
    If there are many variables relevant for a problem then it will generally be impossible to define the joint probability distribution in full. The same is true even if there are just a small number of variables that include some that have a large number of states.
    Much of what follows in this book addresses this issue in one way or another. Specifically, the challenge is to calculate probabilities efficiently, and in most realistic situations this means finding ways of avoiding having to define joint probability distributions in full.
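The combinatorial explosion described above is a one-line computation. The state counts below are the ones given in the text; the dictionary form is ours:

```python
from math import prod

# State counts for variables A..E as given in the text
states = {"A": 5, "B": 2, "C": 5, "D": 1000, "E": 20}

# Size of the full joint probability distribution
print(prod(states.values()))   # 1000000 state combinations
```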
    We next make a start on this crucial challenge. 5.4Independent Events and Conditional Probability
    So, where we have joint events, we can regard the outcomes that result as one big event and we can work out all the probabilities we need from the joint probability distribution. We have just seen some examples of this. It is possible to go a long way with this approach. In fact every problem of uncertainty can ultimately be reduced to a single experiment, but as just discussed, the resulting joint probability distributions may be too large to be practicable. In fact, even in relatively simple situations it turns out that using the joint probability distribution may be unnecessarily complex as the following examples illustrate.
  • A Farewell to Entropy

    Statistical Thermodynamics Based on Information

    • Arieh Ben-Naim(Author)
    • 2008(Publication Date)
    • WSPC
      (Publisher)
    Two events A and B are said to be independent, if and only if

    Pr{A ∩ B} = Pr{A} Pr{B}
    For example, if two persons who are far apart from each other throw a fair die each, the outcomes of the two dice are independent in the sense that the occurrence of, say, “5” on one die, does not have any effect on the probability of occurrence of a result, say, “3” on the other. On the other hand, if the two dice are connected by an inflexible wire, the outcomes of the two results could be dependent. Similarly, if we throw a single die consecutively, and at each throw, the die is deformed or damaged, the outcomes would not be independent. Intuitively, it is clear that whenever two events are independent, the probability of the occurrence of both events, say, “5” on one die and “3” on the other, is the product of the two probabilities. The reason is quite simple. By tossing two dice simultaneously, we have altogether 36 possible elementary events (the first die's outcome “i” and the second die's outcome “j”). Each of these outcomes has equal probability of 1/36 which is also equal to 1/6 times 1/6, i.e., the product of the probabilities of each event separately.
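The 36-outcome argument above is easy to reproduce by enumeration (a sketch of ours, taking "5 on one die and 3 on the other" as an ordered pair, consistent with the 1/36 product in the text):

```python
from fractions import Fraction
from itertools import product

# All 36 elementary events (i, j) for two independent fair dice
pairs = list(product(range(1, 7), repeat=2))

# "5" on the first die and "3" on the second: one elementary event of 36
p_joint = Fraction(sum(1 for i, j in pairs if i == 5 and j == 3), len(pairs))
print(p_joint)                           # 1/36
print(Fraction(1, 6) * Fraction(1, 6))   # 1/36, the product of the two
```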
    A fundamental concept in the theory of probability is the conditional probability. This is the probability of the occurrence of an event A given that an event B has occurred. We write this as Pr{A/B} (read: probability of A given the occurrence of B), and define it by

    Pr{A/B} = Pr{A ∩ B} / Pr{B}
    Clearly, if the two events are independent, then the occurrence of B has no effect on the probability of the occurrence of A. Hence, from (2.5.1) and (2.5.2), we get

    Pr{A/B} = Pr{A}
    We can define the correlation between the two events as

    g(A, B) = Pr{A ∩ B} / (Pr{A} Pr{B})
    We say that the two events are positively correlated when g(A, B) > 1, i.e., the occurrence of one event enhances or increases the probability of the second event. We say that the two events are negatively correlated when g(A, B) < 1, and that they are uncorrelated (sometimes referred to as indifferent) when g(A, B
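Positive and negative correlation in this sense can be illustrated with events on a single fair die. The events chosen here are ours, assuming the ratio definition of g(A, B) above:

```python
from fractions import Fraction

# Correlation g(A, B) = Pr(A and B) / (Pr(A) * Pr(B)) for single-die events
def pr(e): return Fraction(len(e), 6)
def g(A, B): return pr(A & B) / (pr(A) * pr(B))

A = {4, 5, 6}   # "at least 4"
B = {5, 6}      # "at least 5": if B occurs, A is certain
C = {1, 2}      # disjoint from B

print(g(A, B))  # 2 > 1: positively correlated
print(g(B, C))  # 0 < 1: negatively correlated (mutually exclusive)
```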
  • Theory of Probability
    • Boris V. Gnedenko(Author)
    • 2018(Publication Date)
    • Routledge
      (Publisher)
    which P(AB) = P(A) P(B | A) = P(B) P(A | B), (2) i.e., the probability of the product of two events equals the product of the probability of one of them by the conditional probability of the other under the condition that the first one has occurred. The theorem of probabilities multiplication is applicable when one of the events is impossible. So, if P(A) = 0 then P(A | B) = 0 and P(AB) = 0. It is said that an event A is independent of an event B if the relation P(A | B) = P(A) (3) holds, i.e., if the occurrence of the event B does not affect the probability of the event A. If the event A is independent of the event B, then it follows from (2) that P(A) P(B | A) = P(B) P(A). From this, one finds P(B | A) = P(B), (4) i.e., the event B is also independent of A. Thus, under the assumption, independence of events is mutual. If independence of events A and B is defined via the equality P(AB) = P(A) P(B), then this definition is always correct, including the situation where P(A) = 0 or P(B) = 0.
    Let us next generalize the notion of the independence of two events to that of a collection of events. The events B1, B2, …, Bn are called collectively independent if for any one of them, say Bp, and arbitrary other events Bi1, Bi2, …, Bir (ik ≠ p) among them, the events Bp and Bi1 Bi2 ⋯ Bir are mutually independent. By virtue of what has just been said, this definition is equivalent to the following: for any 1 ⩽ i1 < i2 < ⋯ < ir ⩽ n and r (1 ⩽ r ⩽ n), P(Bi1 Bi2 ⋯ Bir) = P(Bi1) P(Bi2) ⋯ P(Bir).
    Note that for collective independence of several events it is not sufficient that they be pairwise independent. The following simple example shows this. Imagine that three faces of a tetrahedron are colored red, green, blue, and the fourth face has sectors with all of these three colors. Let A be the event that the face on which it lands contains red, B the event that it contains green, and C the event that it contains blue
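The tetrahedron example can be verified directly: the events are pairwise independent, yet not collectively independent. The modelling below is our sketch of the setup just described:

```python
from fractions import Fraction

# Four equally likely faces: one red, one green, one blue, one with all three
faces = [{"red"}, {"green"}, {"blue"}, {"red", "green", "blue"}]

def pr(colors):
    """Probability that the landing face contains all the given colours."""
    return Fraction(sum(1 for f in faces if colors <= f), len(faces))

A, B, C = {"red"}, {"green"}, {"blue"}

# Pairwise independent: P(AB) = 1/4 = P(A) P(B), with P(A) = P(B) = 1/2
assert pr(A | B) == pr(A) * pr(B) == Fraction(1, 4)

# But not collectively independent: P(ABC) = 1/4, while P(A)P(B)P(C) = 1/8
print(pr(A | B | C), pr(A) * pr(B) * pr(C))   # 1/4 1/8
```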
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.