Mathematics

Conditional Probability

Conditional probability is the likelihood of an event occurring given that another event has already occurred. It is calculated by dividing the probability of the intersection of the two events by the probability of the given event. This concept is fundamental in understanding the relationship between events and is widely used in fields such as statistics, machine learning, and decision-making.

Written by Perlego with AI-assistance

8 Key excerpts on "Conditional Probability"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • A User's Guide to Business Analytics

    ...Informally, that means there is some dependence on the conditioning event A and the future event B. It is not strictly necessary that the conditioning event should occur prior to the event whose probability we are trying to determine. Indeed, the conditioning event may occur simultaneously with the event in question, or even at a later time (as we will see in the examples involving Bayes’ theorem). For the purpose of initial illustration, however, it helps to visualize the conditioning event as being a part of the outcome at a previous stage of the experiment. We can now also attempt to answer the questions 5.11 (d) and 5.12 (e). These questions can now be understood as Conditional Probability questions. The reader can easily verify that the answers to these two questions are 1800/4000 = 9/20 and 1600/2500 = 16/25, respectively. When we are determining the Conditional Probability of B given A, the information that A has occurred is already available, so we know that the simple event that was generated by the experiment belongs to the event A. This effectively reduces the sample space to the number of simple events that are favorable to A. When we ask, in addition, whether B has also occurred, we are essentially asking whether the simple event that has occurred is from the set of events favorable to A ∩ B. This argument leads to the Conditional Probability formula Pr(B | A) = Pr(A ∩ B) / Pr(A). (5.7) A graphical representation of this is given in Figure 5.2. Obviously, this requires that Pr(A) > 0. Notice that the above equation leads to the general sequential rule for probabilities through the relation Pr(A ∩ B) = Pr(A) Pr(B | A). (5.8) The Conditional Probability function is also a regular probability function in that it satisfies the three axioms of probability. Example 5.13. Given Pr(A) = 0.7, Pr(B) = 0.5 and Pr([A ∪ B]ᶜ) = 0.1, find Pr(A | B). From the definition, Pr(A | B) = Pr(A ∩ B) / Pr(B)...
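The arithmetic in Example 5.13 can be checked numerically. A minimal sketch in Python (the variable names are ours): it combines the complement rule, inclusion-exclusion, and formula (5.7).

```python
# Example 5.13 from the excerpt: Pr(A) = 0.7, Pr(B) = 0.5, Pr((A ∪ B)^c) = 0.1.
p_A = 0.7
p_B = 0.5
p_union_complement = 0.1

# Pr(A ∪ B) = 1 - Pr((A ∪ B)^c), and by inclusion-exclusion
# Pr(A ∩ B) = Pr(A) + Pr(B) - Pr(A ∪ B).
p_union = 1 - p_union_complement          # 0.9
p_intersection = p_A + p_B - p_union      # 0.3

# Conditional probability, as in (5.7): Pr(A | B) = Pr(A ∩ B) / Pr(B).
p_A_given_B = p_intersection / p_B
print(p_A_given_B)  # 0.6 (up to floating-point rounding)
```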

  • Probability, Statistics, and Data
    eBook - ePub

    Probability, Statistics, and Data

    A Fresh Approach Using R

    ...This new information requires us to reconsider the probability that the other event occurs. For example, suppose that you roll two dice and one of them falls off of the table where you cannot see it, while the other one shows a 4. We would want to update the probabilities associated with the sum of the two dice based on this information. The new probability that the sum of the dice is 2 would be 0, the new probability that the sum of the dice is 5 would be 1/6 because that is just the probability that the die that we cannot see is a “1,” and the new probability that the sum of the dice is 7 would also be 1/6 (which is the same as its original probability). Formally, we have the following definition. Definition 2.4. Let A and B be events in the sample space S, with P(B) ≠ 0. The Conditional Probability of A given B is P(A | B) = P(A ∩ B) / P(B). We read P(A | B) as “the probability of A given B.” It is important to keep straight in your mind that the fixed idiom P(A | B) means the probability of A given B, or the probability that A occurs given that B occurs. P(A | B) does not mean the probability of some event called A | B. In mathematics, the vertical bar symbol | is used for Conditional Probability, and you would write A ∪ B for the event “A or B.” In R, the vertical bar denotes the or operator. You do not use the vertical bar in R to work with Conditional Probability. The general process of assuming that B occurs and making computations under that assumption is called conditioning on B. Note that in order to condition on B in the definition of P(A | B), we must assume that P(B) ≠ 0, since otherwise we would get 0/0, which is undefined. This also makes some intuitive sense. If we assume that a probability-zero event occurs, then the probability of further events conditioned on that would need to be undefined. Example 2.15. Two dice are rolled...
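The dice-updating example above can be verified by enumerating the six equally likely values of the hidden die. A small sketch (the helper `p_sum` is ours, not from the text):

```python
from fractions import Fraction

# One die shows a 4; the other fell off the table. Condition on the visible 4
# by enumerating the six equally likely values of the hidden die.
hidden = range(1, 7)

def p_sum(target):
    """P(sum of both dice = target | visible die shows 4)."""
    favorable = sum(1 for h in hidden if h + 4 == target)
    return Fraction(favorable, 6)

print(p_sum(2))   # 0   -- a sum of 2 is impossible once a 4 is showing
print(p_sum(5))   # 1/6 -- the hidden die must be a 1
print(p_sum(7))   # 1/6 -- the hidden die must be a 3
```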

  • Probability in Petroleum and Environmental Engineering
    • George V Chilingar, Leonid F. Khilyuk, Herman H. Reike (Authors)
    • 2012(Publication Date)

    ...The measure of possibility of occurrence of a certain event A, provided that some other event B appeared in the experiment, may be different from the value of probability P(A). From the previous considerations, it is clear that the Conditional Probability P(A|B) defined above is a suitable measure for that possibility. To justify that, it is possible to use the statistical definition of probability. For that purpose assume that an experiment of interest can be repeated n times. Suppose that k_n(B) is the number of experiments for which the event B occurs, and k_n(AB) is the number of experiments for which A and B occur jointly. The ratio k_n(AB)/k_n(B), denoted by v_n(A|B), is called the conditional frequency of event A for a given event B. If the considered experiment is stochastically stable, then the quantities k_n(AB) and k_n(B) only slightly depend on outcomes of particular trials. Hence, the conditional frequency v_n(A|B) almost does not depend on these outcomes. This value indicates how often one can expect an appearance of event A in a series of trials in which B occurs. Therefore, if n is sufficiently large, then v_n(A|B) can be chosen as a measure of possibility of A in the realizations of a given experiment in which B occurs. This measure, however, is not convenient for theoretical compositions and particular computations because it depends on n. It can be easily modified in a more convenient way in terms of probability: v_n(A|B) = k_n(AB)/k_n(B) = [k_n(AB)/n] / [k_n(B)/n]. (5.2) For stochastically stable experiments and large n, v_n(A|B) ≈ P(AB)/P(B). Note that neither P(AB) nor P(B) depend on n. Thus, taking into consideration Eq. 5.2, one can choose the value P(AB)/P(B) as a measure of possibility of event A in a given experiment provided that B occurs in this experiment. According to definition 5.1, it is P(A|B). In other words, the Conditional Probability P(A|B) is a measure of the possibility of realization of event A in a given experiment if B occurs in this experiment...
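The convergence of the conditional frequency v_n(A|B) toward P(AB)/P(B) can be illustrated by simulation. A sketch under assumptions of our own choosing (the die-rolling experiment and the events A and B below are illustrative, not from the excerpt):

```python
import random

random.seed(1)

# Conditional frequency v_n(A|B) = k_n(AB) / k_n(B) for a die-rolling
# experiment (our illustrative choice):
#   B = "roll is even", A = "roll is >= 5", so A ∩ B = {6}.
# Theory: P(AB)/P(B) = (1/6)/(1/2) = 1/3.
n = 100_000
k_B = k_AB = 0
for _ in range(n):
    roll = random.randint(1, 6)
    if roll % 2 == 0:
        k_B += 1
        if roll >= 5:
            k_AB += 1

v = k_AB / k_B
print(v)  # close to 1/3 for large n
```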

  • Statistics for the Behavioural Sciences
    eBook - ePub

    Statistics for the Behavioural Sciences

    An Introduction to Frequentist and Bayesian Approaches

    • Riccardo Russo (Author)
    • 2020(Publication Date)
    • Routledge
      (Publisher)

    ...The probability tree in Figure 3.6 displays these probabilities. From the tree it is expected that 85.5% of the people in the population will be free of the disease and classified as such by the test; 9.5% of the people will be free of the disease, but classified as sick; 0.5% of the people will be sick and classified as healthy; and finally, 4.5% of the people will be sick and classified as sick. Figure 3.6 Probability tree for the attentional disorder example. 3.6 Conditional Probability Conditional Probability refers to the probability that an event B occurs given that an event A has occurred. This is denoted as: P(B | A), i.e., the probability of B given A. Venn diagrams can be useful in understanding how to calculate conditional probabilities. Let us go back to the die rolling example where we had event A, i.e., “obtaining an even number when rolling a die”, and event B, i.e., “obtaining a number equal to or larger than 5 when rolling a die”. To know the probability of obtaining a number equal to or larger than 5, given that the obtained number is even, i.e., P(B | A), we first need to identify the number of simple events constituting event A, i.e., 3, which are {2, 4, 6}. Then, we need to identify the number of simple events that constitute B and which are also included in A, i.e., 1, which is {6}. To calculate P(B | A) we take the number of simple events constituting the event B that are also included in the set of simple events making up event A, i.e., 1, and divide this value by n(A) = 3. It then follows that: P(B | A) = 1/n(A) = 1/3. What are the simple events constituting the event B and that are also included in the set of simple events making up event A? If we inspect the Venn diagram in Figure 3.2 it appears that these simple events are those making up the intersection of events A and B, i.e., A ∩ B...
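The die example can be reproduced by counting simple events directly, since conditioning on A reduces the sample space to A. A minimal sketch (the set names mirror the excerpt):

```python
from fractions import Fraction

# The die example from the excerpt: A = "even number", B = "number >= 5".
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {5, 6}

# Conditioning on A reduces the sample space to A, so
# P(B | A) = |A ∩ B| / |A| = |{6}| / |{2, 4, 6}|.
p_B_given_A = Fraction(len(A & B), len(A))
print(p_B_given_A)  # 1/3
```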

  • Probability
    eBook - ePub

    Probability

    A Philosophical Introduction

    • D.H. Mellor (Author)
    • 2004(Publication Date)
    • Routledge
      (Publisher)

    ...7 Conditionalisation I Conditional Probability In this chapter I shall develop a widely accepted account of how evidence about contingent propositions should affect our credences in them, an account which does not postulate any confirmation relations. It assumes instead that, for all propositions A and B, the Conditional Probability P(A|B) will, on any reading of it, satisfy the equation (1.2), which gives P(A|B) a value fixed by the values of the unconditional probabilities P(A∧B) and P(B), provided only that P(B) is greater than 0. It is now time to ask why we should accept (1.2). The usual answer to this question is that it is true by definition, because it defines P(A|B). However, this reading of (1.2) does not explain why P(A|B) has the implications it was credited with when the concept of Conditional Probability was introduced in chapter 1.VII. It cannot for example follow just from (1.2) that, as stated in 1.VII, A will be independent of B, ‘meaning that B tells us nothing about A's prospects of being true,’ if and only if P(A∧B)/P(B) is equal to A's unconditional probability P(A). More generally and importantly, (1.2) does not entail that, read epistemically, P(A|B) measures the credence in A that is justified by learning that B is true. This poses a dilemma. On the one hand, imposing that epistemic entailment makes (1.2) a substantive thesis about conditional probabilities so understood: it can no longer be made true merely by definition. On the other hand, if we use (1.2) to define P(A|B), we cannot take its epistemic application for granted, as we have so far done. It may not matter which we do, but we cannot have it both ways. We cannot assume that an epistemic EP(A|B) will, by mere definition, both satisfy (1.2) and measure the credence in A which evidence B justifies...

  • Introduction to Statistics for Forensic Scientists
    • David Lucy (Author)
    • 2013(Publication Date)
    • Wiley
      (Publisher)

    ...9 Conditional Probability and Bayes’ theorem In Section 3.1 we examined probabilities for events which were independent. Independence implied that to calculate the probabilities for joint events (the coincidence of two or more events) the probabilities for the single events could be multiplied. In this chapter the more common case, in which events are not thought to be independent, will be examined. This will eventually lead to the same idea for evidence evaluation as that seen in Section 8.3, although a more mathematical approach will be taken. 9.1 Conditional Probability According to Lee (2004, p. 5) all probabilities are conditional probabilities; that is, the probability of a coin landing with the heads face uppermost being 0.5 is conditional on it being a fair coin. As we saw in Section 3.1.4, even simple die throwing systems can force us to think explicitly in terms of conditional probabilities, so it might be expected that consideration of empirical data would entail the further exploration of Conditional Probability. Rogers et al. (2000) give an analysis which uses the rhomboid fossa as an indicator of the sex of unknown skeletalized human remains. The rhomboid fossa is a groove which sometimes occurs on one end of the clavicle as a result of the attachment of the rhomboid ligament, and on skeletalized remains is more frequent in males than females. The contingency table † shown in Table 9.1 has been reconstructed from Rogers et al. (2000, Table 2 ‡), and is simply a count of how many individuals fall in each category; for example, the authors counted 155 male skeletons with a rhomboid fossa, and 101 female skeletons without one...
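A contingency-table calculation in the spirit of Table 9.1 can be sketched as follows. Only two of the four counts appear in the excerpt; the other two below are hypothetical placeholders (the real values are in Rogers et al. 2000), so the resulting probability is illustrative only:

```python
# Conditional probability from a 2x2 contingency table, in the style of
# Table 9.1. Only two counts appear in the excerpt; the other two are
# HYPOTHETICAL placeholders chosen purely to make the arithmetic runnable.
counts = {
    ("male", "fossa"): 155,       # from the excerpt
    ("male", "no_fossa"): 80,     # hypothetical placeholder
    ("female", "fossa"): 40,      # hypothetical placeholder
    ("female", "no_fossa"): 101,  # from the excerpt
}

# P(male | fossa) = count(male, fossa) / count(fossa)
n_fossa = counts[("male", "fossa")] + counts[("female", "fossa")]
p_male_given_fossa = counts[("male", "fossa")] / n_fossa
print(round(p_male_given_fossa, 3))
```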

  • Probability
    eBook - ePub
    • Darrell P. Rowbottom (Author)
    • 2015(Publication Date)
    • Polity
      (Publisher)

    ...When someone (truthfully) says ‘I will probably come’ regarding an event, like a philosophy seminar, they are typically working on the basis of their relevant personal background information. If they later said ‘I probably won't be able to make it’, this would normally be because their background information had changed. They might have learned something new, e.g. that they'd become ill or that there was the possibility of a hot date instead. (Hot dates are usually better than philosophy seminars. Trust me.) However, there is nothing to stop us defining unconditional probabilities in terms of conditional probabilities. One nice trick suggested by Popper, for example, is to define the unconditional logical probability of p as the logical probability of p conditional on any tautology, T. (Examples of tautologies, for those unfamiliar with logic, are ¬(p & ¬ p) or ‘It is not the case that p and not-p are true’ and p ∨ ¬ p or ‘Either p is true or not-p is true’. These are called the law of non-contradiction and the law of the excluded middle, respectively. They are true in all logically possible worlds.) In short, Popper said that P(p) should be understood to represent P(p, T). That's no objection to writing ‘P(p, T)’ as ‘P(p)’ in so far as the mathematics is concerned. 4    Logical Probabilities and Beliefs Before we press on, let's pause for a moment to think about how logic relates to beliefs. This is worth doing because some people talk about the logical interpretation as if it concerns only what we should believe, although this is misleading...

  • Statistics for Finance
    • Erik Lindström, Henrik Madsen, Jan Nygaard Nielsen (Authors)
    • 2018(Publication Date)

    ...e.g., P(x_i) = P(x_i)/P(Ω). (B.5) When we have conditioned on the event z_j, we know that z_j has occurred, hence z_j now is the sample space. This explains the normalisation by ℙ(Z = z_j) in (B.4). The fraction of x_i that can occur, given the fact that z_j has occurred, is given by x_i ∩ z_j. The definition of the conditional expectation for discrete stochastic variables is E[X | Z = z_j] = Σ_i x_i P(X = x_i | Z = z_j). (B.6) The (unconditional) expectation of a stochastic variable X is given by E[X] = ∫_Ω X(ω) dℙ(ω) (B.7) where the integration is taken over the entire sample space, with respect to the measure (distribution) ℙ. This covers the case where no prior knowledge of the outcome ω is available. Now assume that we know that ω ∈ B, and ℙ(B) > 0. As a preliminary definition of conditional expectation we have the following: Definition B.8 (Conditional expectation given a single event). Given a probability space (Ω, F, ℙ), assume that B ∈ F with ℙ(B) > 0. The conditional expectation of X given B is defined by E[X | B] = (1/ℙ(B)) ∫_B X(ω) dℙ(ω). (B.8) Note that this definition is very similar to the definition of conditional probabilities given in (B.4), and with a similar interpretation. This definition is now generalized to the case where the conditioning argument is a partition. Let 𝒫 = {A_1, ..., A_K} be a partition of Ω with ℙ(A_i) > 0; then we know from Section B.2 that this could be interpreted as if we know in which set A_i the true ω lies. This leads to the following preliminary definition of conditional expectation: Definition B.9. Let 𝒫 = {A_1, ..., A_K} be a partition of Ω with ℙ(A_i) > 0; then the conditional expectation is given by E[X | 𝒫] = Σ_{n=1}^{K} I{ω ∈ A_n} E[X | A_n] (B.9) where I{·} denotes the indicator function. The problem with this definition is that it assumes that each set must have positive probability, which is an unnecessary restriction, as we shall see...
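For a discrete sample space, Definition B.8 reduces to a weighted average of X over the outcomes in B, normalised by ℙ(B). A sketch with an illustrative choice of X and B (a fair die, conditioned on an even roll; these are our assumptions, not from the excerpt):

```python
from fractions import Fraction

# Conditional expectation given a single event (Definition B.8), in the
# discrete case: E[X | B] = (1/P(B)) * Σ_{ω ∈ B} X(ω) P({ω}).
# Illustrative choice: X = value of a fair die, B = "roll is even".
omega = range(1, 7)
p = {w: Fraction(1, 6) for w in omega}   # uniform measure on the die
B = {2, 4, 6}

p_B = sum(p[w] for w in B)               # P(B) = 1/2
e_X_given_B = sum(w * p[w] for w in B) / p_B
print(e_X_given_B)  # 4, the average of {2, 4, 6}
```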