Mathematics

Combining Random Variables

Combining random variables involves finding the probability distribution of a new random variable created by combining two or more existing random variables. This can be done through operations such as addition, subtraction, multiplication, or division. The resulting distribution is determined by the specific method of combination and the properties of the original random variables.
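As a concrete illustration of combining by addition, the distribution of the sum of two independent discrete random variables can be computed by convolving their probability mass functions. A minimal Python sketch for the sum of two fair dice (the `pmf_sum` helper is written for this example, not taken from any of the excerpted books):

```python
from fractions import Fraction

# pmf of a single fair die: each face 1..6 has probability 1/6
die = {k: Fraction(1, 6) for k in range(1, 7)}

def pmf_sum(px, py):
    """Distribution of X + Y for independent X and Y,
    each given as a {value: probability} dict."""
    out = {}
    for x, p in px.items():
        for y, q in py.items():
            out[x + y] = out.get(x + y, 0) + p * q
    return out

total = pmf_sum(die, die)   # distribution of the sum of two dice
```

Here `total[7]` comes out to 1/6, the familiar result that 7 is the most likely total, and the probabilities sum to 1, confirming that `total` is a valid distribution.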

Written by Perlego with AI-assistance

6 Key excerpts on "Combining Random Variables"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), which adds context and meaning to key research topics.
  • Introductory Probability and Statistics

    Applications for Forestry and Natural Sciences (Revised Edition)

    • Robert Kozak, Antal Kozak, Christina Staudhammer, Susan Watts (Authors)
    • 2019 (Publication Date)

    ...4 Random Variables and Probability Distributions Outcomes of Random Experiments Like the theory of probability, random variables and their probability distributions play an important role in statistical inference. The main objectives of this chapter are to show how outcomes of random experiments can be described in real (numerical) terms and how probabilities can be assigned to these real numbers. Numerical descriptions of outcomes and their respective probabilities form what are known as probability distributions or probability density functions. We can use these distributions to compute the means and the variances of the random variables that they describe. All of these tools are useful in helping to provide further information for describing populations. 4.1    Random Variables In Chapter 3 (this volume), we discussed the concepts of random experiments, sample spaces and outcomes. Some random experiments produce outcomes that can be described by letters, symbols or just general descriptions. Other experiments produce outcomes in numerical terms, such as: the number of heads that could occur when a coin is tossed three times; the total number of dots observed when rolling a pair of dice; the number of plants in a 100 m² area; or the number of seeds that germinate in a seedbed. A random variable is a well-defined numerical description of the outcomes in the sample space of a random experiment. We will denote random variables by capital letters, such as X, Y or Z, while small letters, such as x, y or z (usually with subscripts), will denote individual values or outcomes for that random variable. A sample space associated with a random experiment can be classified as discrete or continuous. A discrete sample space is one that contains a finite number of elements, such as the eight possible outcomes from tossing a coin three times...

  • Modeling and Analysis of Local Area Networks
    • Paul J. Fortier (Author)
    • 2018 (Publication Date)
    • CRC Press
      (Publisher)

    ...This follows from the discussions of sets of events earlier in this chapter. Since the union of the events A and B yields all sample points in A and B considered together, the sum of the events separately will yield the same quantity plus an extra element for each element in the intersection of the two events. Thus, we must subtract the intersection to form the equality, hence, Equation (4-25). Note that Equation (4-26) is essentially an extended version of Equation (4-15) where AB does not equal the null set. RANDOM VARIABLES Thus far, we have been discussing experiments, along with their associated event space, in the context of the probabilities of occurrence of the events. We will now move on to a topic of great importance, which relates the basic probability measures to real-world quantities. The concept of a random variable relates the probabilities of the outcomes of an experiment to a range or set of numbers. A random variable, then, is defined as a function whose input values are the events of the sample space and whose outcome is a real number. For example, we could have an experiment in which the outcome is the length of each message that arrives over a communication line. A random variable defined on this experiment could be the number of messages that equaled a certain character count. Often, we want to consider a range of values of the random variables; for instance, the range of messages no greater than x₁. This is denoted here as {X ≤ x₁}, where X denotes the random variable and x₁ is a value. We may call this set the event where the random variable X yields a value at most x₁. Continuing with the previous example, suppose that we had the following outcomes from the message length experiment. The random variable defined by the number of times the message length of 500 is seen corresponds to message 2. The event {X > 2000} contains the outcomes of messages 1, 4, 5, and 6. Random variables may be either discrete or continuous...
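The event notation above can be made concrete with a short sketch. The message lengths below are hypothetical — the excerpt does not list the book's actual figures — but they are chosen so that, as in the text, message 2 has length 500 and the event {X > 2000} contains messages 1, 4, 5 and 6:

```python
# Hypothetical character counts for messages 1..6 (illustrative values only)
lengths = {1: 2500, 2: 500, 3: 1800, 4: 3000, 5: 2200, 6: 4000}

# The event {X > 2000}: the set of messages whose length exceeds 2000
event = sorted(m for m, x in lengths.items() if x > 2000)
```

With these values, `event` is `[1, 4, 5, 6]`, matching the event described in the excerpt.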

  • Bayesian Thinking in Biostatistics
    • Gary L Rosner, Purushottam W. Laud, Wesley O. Johnson (Authors)
    • 2021 (Publication Date)

    ...See the description given at the end of Section A.2.1. Marginal Distributions. The joint distribution of two random variables X and Y determines the separate individual distributions of X and of Y. These distributions are called marginal distributions. A marginal distribution fully describes the probability structure of the random variable by itself, without reference to the other. The relevant expressions, with subscripts used on functions to clarify the random variable to which the marginal belongs, are: F_X(x) = F(x, ∞), F_Y(y) = F(∞, y); f_X(x) = ∫_{−∞}^{∞} f(x, y) dy, f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx; f_X(x) = Σ_{all y} f(x, y), f_Y(y) = Σ_{all x} f(x, y). Conditional Distributions. These distributions play a crucial role in Bayesian statistics. They describe how the probability structure of one random variable is affected by knowledge of the other. We use the notation X | Y = y to denote the random variable X when Y is known to take the value y. The distribution of this conditional random variable is determined by the joint distribution. The relationships are given by the following expressions: f_{X|Y=y}(x) = f(x, y) / f_Y(y), f_{Y|X=x}(y) = f(x, y) / f_X(x). These relationships hold for pmfs as well as pdfs. Of course, the denominators on the right-hand sides must be positive; otherwise, the conditional density is undefined. Law of Total Probability. For two random variables, f(y) = ∫_{−∞}^{∞} f(y | x) f(x) dx. This follows from the relationship between the joint distribution and the marginal and conditional distributions stated above. Bayes’ Theorem. For two random variables, f(x | y) = f(y | x) f(x) / ∫_{−∞}^{∞} f(y | x) f(x) dx. A.2.3 Multivariate Random Variables Extending the development of bivariate random variables to the case of more than two variables is straightforward. We state some of the above results for k joint random variables X₁, …, X_k...
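The marginal and conditional formulas in this excerpt can be checked numerically on a small discrete example. The joint pmf below is a made-up 2×2 table, not from the book; exact fractions make the identities easy to verify:

```python
from fractions import Fraction

# A hypothetical joint pmf f(x, y) over x ∈ {0, 1}, y ∈ {0, 1}
joint = {(0, 0): Fraction(1, 8), (0, 1): Fraction(3, 8),
         (1, 0): Fraction(2, 8), (1, 1): Fraction(2, 8)}

# Marginals: f_X(x) = Σ_y f(x, y) and f_Y(y) = Σ_x f(x, y)
fX = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
fY = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}

# Conditional: f_{X|Y=y}(x) = f(x, y) / f_Y(y), defined when f_Y(y) > 0
def f_x_given_y(x, y):
    return joint[(x, y)] / fY[y]
```

For instance, f_Y(1) = 3/8 + 2/8 = 5/8 and f_{X|Y=1}(0) = (3/8)/(5/8) = 3/5, and each marginal sums to 1, as the formulas require.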

  • An Introduction to Financial Mathematics

    ...For example, to prove that X/Y is a random variable, apply the theorem to the continuous function f(x, y) = x/y, y ≠ 0. The assertions of Corollary 3.2.2 regarding the sum and product of two random variables clearly extend to arbitrarily many random variables. Thus from Proposition 3.1.3 we have 3.2.3 Corollary. A linear combination Σ_{j=1}^{n} c_j 1_{A_j} of indicator functions 1_{A_j} with A_j ∈ ℱ is a random variable. 3.2.4 Corollary. If max(x, y) and min(x, y) denote, respectively, the larger and smaller of the real numbers x and y, and if X and Y are random variables, then max(X, Y) and min(X, Y) are random variables. Proof. The identity max(x, y) = y + (|x − y| + x − y)/2 shows that max(x, y) is continuous. Therefore, by Theorem 3.2.1, max(X, Y) is a random variable. A similar argument shows that min(X, Y) is a random variable (or one can use the identity min(x, y) = −max(−x, −y)). The Cumulative Distribution Function The cumulative distribution function (cdf) of a random variable X is defined by F_X(x) = ℙ(X ≤ x), x ∈ ℝ. A cdf can be useful in expressing certain probabilistic relations, as in the characterization of independent random variables described later. 3.2.5 Example. The cdf of the number X of heads that come up in three tosses of a fair coin is given by F_X(x) = 0 if x < 0; 1/8 if 0 ≤ x < 1 (0 heads); 1/2 if 1 ≤ x < 2 (0 or 1 head); 7/8 if 2 ≤ x < 3 (0, 1, or 2 heads); 1 if x ≥ 3 (0, 1, 2, or 3 heads). The function may also be described by a linear combination of indicator functions: F_X = (1/8)·1_[0,1) + (1/2)·1_[1,2) + (7/8)·1_[2,3) + 1_[3,∞). 3.3    Discrete Random Variables A random variable X on a probability space (Ω, ℱ, ℙ) is said to be discrete if the range of X is countable...
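The step-function cdf in Example 3.2.5 can be evaluated directly. A minimal sketch (the function name `F` and the list of steps are ours, built from the jumps the excerpt lists):

```python
from fractions import Fraction

def F(x):
    """Cdf of the number of heads in three fair coin tosses:
    jumps of 1/8, 3/8, 3/8, 1/8 at x = 0, 1, 2, 3."""
    steps = [(0, Fraction(1, 8)), (1, Fraction(1, 2)),
             (2, Fraction(7, 8)), (3, Fraction(1, 1))]
    value = Fraction(0)           # F(x) = 0 for x < 0
    for threshold, level in steps:
        if x >= threshold:        # right-continuous step function
            value = level
    return value
```

Evaluating at a few points reproduces the piecewise values from the excerpt: F(−1) = 0, F(1.5) = 1/2, F(2) = 7/8, and F(x) = 1 for x ≥ 3.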

  • A User's Guide to Business Analytics

    ...6 Random Variables and Probability Distributions So far we have described the probability of events which are elements of a sample space in connection with a statistical experiment. In statistics, however, whenever we deal with probabilities, it is in relation to the possible values that a random variable can assume. So what is a random variable? To explain this concept, let us start with a simple probability experiment. Suppose we have a fair coin and we toss this coin three times in succession. Denoting the possible events of Head and Tail in each toss by H and T, the sample space of the entire experiment may be enumerated as S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. As the coin is a fair one, each of these eight simple events has a probability of 1/8. Now, instead of finding the probabilities of each of the simple events in the sample space S separately, let us view the problem in a different way. Let us define a new variable X, which represents the number of heads in each of the simple events, and suppose our questions are now rephrased in terms of values of the variable X. For example, we may want to know the value of the probability Pr(X = 2). Since, according to our construction, each of the simple events {HHT}, {HTH} and {THH} corresponds to the event (X = 2), the required probability in this case is 1/8 + 1/8 + 1/8 = 3/8; similarly, Pr(X = 0) = 1/8, Pr(X = 1) = 3/8 and Pr(X = 3) = 1/8. Note that the variable X associates a real value with every element of the sample space. In the above example, the association induced by X can be represented as follows: HHH → 3, HHT → 2, HTH → 2, HTT → 1, THH → 2, THT → 1, TTH → 1, TTT → 0. This X is a random variable. Formally, a random variable is an association which matches every element in the sample space with a real value...
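The construction above — enumerate the sample space, map each outcome to a value of X, then add the probabilities of the outcomes mapped to that value — can be sketched in a few lines of Python:

```python
from itertools import product
from fractions import Fraction

# Sample space of three coin tosses; each of the 8 outcomes has probability 1/8
S = [''.join(t) for t in product('HT', repeat=3)]

# X maps each outcome to its number of heads, e.g. 'HHT' -> 2
X = {s: s.count('H') for s in S}

# Pr(X = 2): sum the probabilities of the outcomes with exactly two heads
p2 = sum(Fraction(1, 8) for s in S if X[s] == 2)
```

The three outcomes HHT, HTH and THH give `p2` = 3/8, matching the excerpt's calculation of Pr(X = 2).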

  • Probability in Petroleum and Environmental Engineering
    • George V Chilingar, Leonid F. Khilyuk, Herman H. Reike (Authors)
    • 2012 (Publication Date)

    ...CHAPTER 7 RANDOM VARIABLES AND DISTRIBUTION FUNCTIONS QUANTITIES DEPENDING ON RANDOM EVENTS Many physical experiments consist of measuring and recording variables. For example, one can measure velocity and coordinates of a moving body, density of a substance, number of accidents in a city, the rate of water flow, concentration of contaminants in some medium, etc. Variables measured in stochastic experiments usually depend on some random events. Such variables are called random. Several examples of random variables are presented below. Example 7.1. Anthropological measurements Suppose that one measures the height of a person. The result of the measurement is defined by an outcome of a stochastic experiment, an individual chosen at random. Example 7.2. Quality control A set of samples is chosen from some lot of articles, and all articles from this set are checked for quality. The number of defective articles in the sample set is a random variable whose value is determined mainly by the number of defective articles in the whole lot (and the total number of articles in the lot). The value is determined by a set of factors including various random events related to the manufacturing process. Example 7.3. Brownian motion Brownian motion is the movement of microscopic particles (both organic and inorganic) dispersed in water or other fluid. Assume that during Brownian movement, a particle is located at point x₀ at the moment of time t₀ and at point x₁ at the moment of time t₁ (t₁ > t₀). During time t₁ − t₀, the particle is displaced by the distance |x₁ − x₀|, depending on the direction of movement of the particle. The trajectory of the particle is determined by the impacts of molecules of the liquid surrounding the particle. Moments of time and magnitudes of impacts are random in this experiment...