Mathematics

Transforming Random Variables

Transforming a random variable means applying a function to it, which produces a new random variable with its own probability distribution. The aim is to derive the distribution and properties of the transformed variable from those of the original. Common transformations include linear, logarithmic, and exponential transformations, each of which can provide valuable insight into the behavior of the random variable.
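To make this concrete, here is a minimal sketch (plain Python with NumPy; the parameters are illustrative choices made here, not taken from the excerpts below) showing how a linear transformation Y = aX + b shifts and scales a normally distributed random variable:

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(loc=2.0, scale=3.0, size=1_000_000)  # X ~ N(2, 3^2)

    # Linear transformation Y = aX + b: the mean becomes a*E[X] + b and
    # the standard deviation becomes |a| * sd(X).
    a, b = 9 / 5, 32.0          # the Celsius-to-Fahrenheit coefficients
    y = a * x + b
    print(y.mean(), y.std())    # approximately 35.6 and 5.4

A monotone nonlinear transformation such as log or exp changes not just the location and scale but the entire shape of the distribution.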

Written by Perlego with AI-assistance

3 Key excerpts on "Transforming Random Variables"

  • An Introduction to Probability and Statistical Inference
    • George G. Roussas(Author)
    • 2003(Publication Date)
    • Academic Press
      (Publisher)
    Chapter 6

    Transformation of Random Variables

    This chapter is devoted to transforming a given set of r.v.’s to another set of r.v.’s. The practical need for such transformations will become apparent by means of concrete examples to be cited and/or discussed. The chapter consists of five sections. In the first section, a single r.v. is transformed into another single r.v. In the following section, the number of available r.v.’s is at least two, and they are to be transformed into another set of r.v.’s of the same or smaller number. Two specific applications produce two new distributions, the t-distribution and the F-distribution, which are of great applicability in statistics. A brief account of specific kinds of transformations is given in the subsequent two sections, and the chapter is concluded with a section on order statistics.

    6.1. Transforming a Single Random Variable

    EXAMPLE 1
    Suppose that the r.v.’s X and Y represent the temperature in a certain locality measured in degrees Celsius and Fahrenheit, respectively. Then it is known that X and Y are related as follows: Y = (9/5)X + 32.
    This simple example illustrates the need for transforming a r.v. X into another r.v. Y, if Celsius degrees are to be transformed into Fahrenheit degrees.
    EXAMPLE 2
    As another example, let the r.v. X denote the velocity of a molecule of mass m. Then it is known that the kinetic energy of the molecule is a r.v. Y related to X in the following manner: Y = (1/2)mX^2.
    Thus, determining the distribution of the kinetic energy of the molecule involves transforming the r.v. X as indicated above.
    The formulation of the general problem is as follows: Let X be a r.v. of the continuous type with p.d.f. f_X, and let h be a real-valued function defined on ℜ. Define the r.v. Y by Y = h(X) and determine its p.d.f. f_Y. Under suitable regularity conditions, this problem can be resolved in two ways. One is to determine first the d.f. F_Y and then obtain f_Y by differentiation, and the other is to obtain f_Y
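    Either route can be checked numerically. The sketch below (an illustration added here, not code from the book) takes X ~ Exponential(1) and Y = h(X) = X^2; the d.f. method gives F_Y(y) = P(X^2 ≤ y) = F_X(√y) = 1 − e^(−√y), and a Monte Carlo sample of Y should reproduce this:

        import numpy as np

        rng = np.random.default_rng(1)

        # X ~ Exponential(1); Y = h(X) = X**2, with h increasing on [0, inf).
        x = rng.exponential(scale=1.0, size=1_000_000)
        y = x ** 2

        # d.f. method: F_Y(y) = P(X**2 <= y) = F_X(sqrt(y)) = 1 - exp(-sqrt(y));
        # differentiating gives the p.d.f. f_Y(y) = exp(-sqrt(y)) / (2*sqrt(y)).
        for t in (0.25, 1.0, 4.0):
            print(t, np.mean(y <= t), 1.0 - np.exp(-np.sqrt(t)))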
  • Probability, Statistics, and Stochastic Processes
    • Peter Olofsson, Mikael Andersson(Authors)
    • 2012(Publication Date)
    • Wiley
      (Publisher)
    Chapter 2 Random Variables

    2.1 Introduction

    We saw in the previous chapter that many random experiments have numerical outcomes. Even if the outcome itself is not numerical, such as is the case in Example 1.4, where a coin is flipped twice, we often consider events that can be described in terms of numbers, for example, {the number of heads equals 2}. It would be convenient to have some mathematical notation to avoid the need to spell out all events in words. For example, instead of writing {the number of heads equals 1} and {the number of heads equals 2}, we could start by denoting the number of heads by X and consider the events {X = 1} and {X = 2}. The quantity X is then something whose value is not known before the experiment but becomes known after.
    Definition 2.1. A random variable is a real-valued variable that gets its value from a random experiment.
    There is a more formal definition that defines a random variable as a real-valued function on the sample space. If X denotes the number of heads in two coin flips, we would thus, for example, have X (HH ) = 2. In a more advanced treatment of probability theory, this formal definition is necessary, but for our purposes, Definition 2.1 is enough.
    A random variable X is thus something that does not have a value until after the experiment. Before the experiment, we can only describe the set of possible values, that is, the range of X and the associated probabilities. Let us look at a simple example.
    Example 2.1. Flip a coin twice and let X denote the number of heads. Then X has range {0, 1, 2} and the associated probabilities are

        P(X = 0) = 1/4,   P(X = 1) = 1/2,   P(X = 2) = 1/4,

    and we refer to these probabilities as the distribution of X.
    In the last example, any three numbers between 0 and 1 that sum to 1 form a possible distribution (recall Section 1.4), and the particular choice in the example indicates that the coin is fair. Let us next restate some of the examples from Section 1.2 in terms of random variables. In each case, we define X
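    Example 2.1 can be reproduced by enumerating the sample space; the short sketch below (added here for illustration, not part of the book) counts heads over the four equally likely outcomes of two fair coin flips:

        from itertools import product

        # Sample space of two fair coin flips: HH, HT, TH, TT, each with
        # probability 1/4. X = number of heads.
        outcomes = list(product("HT", repeat=2))
        dist = {k: sum(1 for w in outcomes if w.count("H") == k) / len(outcomes)
                for k in range(3)}
        print(dist)  # {0: 0.25, 1: 0.5, 2: 0.25} -- the distribution of X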
  • Handbook of Monte Carlo Methods
    • Dirk P. Kroese, Thomas Taimre, Zdravko I. Botev(Authors)
    • 2013(Publication Date)
    • Wiley
      (Publisher)
    Many common distributions and families of distributions are related to each other via simple transformations. Such relations lead to general rules for generating random variables. For example, generating random variables from any location–scale family of distributions can be carried out by generating random variables from the base distribution of the family, followed by an affine transformation. A selection of common transformations is discussed in Section 3.1.2. Universal procedures for generating random variables include the inverse-transform method (Section 3.1.1), the alias method (Section 3.1.4), the composition method (Section 3.1.2.6), and the acceptance–rejection method (Section 3.1.5).
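    As a minimal sketch of two of these rules (the function names are mine, not the book's), the following draws Exponential variates by the inverse-transform method and normal variates from a location–scale family by an affine transformation of the base N(0, 1) distribution:

        import numpy as np

        rng = np.random.default_rng(42)

        # Inverse-transform method: if U ~ U(0, 1) and F is a continuous cdf,
        # then X = F^(-1)(U) has cdf F. For Exponential(rate),
        # F(x) = 1 - exp(-rate*x), so F^(-1)(u) = -log(1 - u) / rate.
        def exponential_inverse_transform(rate, size):
            u = rng.uniform(size=size)
            return -np.log(1.0 - u) / rate

        # Location-scale family: if Z follows the base distribution, then
        # X = loc + scale*Z belongs to the same family (an affine transformation).
        def normal_location_scale(loc, scale, size):
            z = rng.standard_normal(size)  # base distribution N(0, 1)
            return loc + scale * z         # N(loc, scale^2)

        print(exponential_inverse_transform(2.0, 100_000).mean())  # ~ 0.5
        print(normal_location_scale(10.0, 3.0, 100_000).std())     # ~ 3.0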
    Remark 3.1.1 (Computing With Finite Precision)
    In our generation algorithms we will assume that we have a source for generating perfect iid U(0, 1) random variables, and that our computing device can manipulate real numbers to infinite precision. However, usually neither of these assumptions is satisfied in practice due to the limitations of finite precision computation.
    In computer implementations it is common to use floating-point arithmetic. Floating-point numbers (in the IEEE 754-2008 standard) are represented by binary vectors of length d, where d = 32 is called single precision, d = 64 is called double precision, and d = 128 is called quadruple precision. These binary vectors are ordered as

        (s, e_1, …, e_p, m_1, …, m_q),

    where s is the sign, (e_1, …, e_p) is the biased exponent, and (m_1, …, m_q) is the mantissa (or trailing significand field), and represent the numbers given by

        (−1)^s × 2^(e − b) × (1 + m_1 2^(−1) + … + m_q 2^(−q)),   e = e_1 2^(p−1) + … + e_p 2^0,

    where b = 2^(p−1) − 1 is called the bias. sNaN and qNaN represent two fictitious “numbers” different from ±∞. The values of p and q for the different precisions are given in the following table.

        Precision   d     p    q
        Single      32    8    23
        Double      64    11   52
        Quadruple   128   15   112
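    This layout can be inspected directly. The sketch below (added here for illustration, using only Python's standard library) unpacks the sign, biased exponent, and mantissa of a double precision number, with p = 11, q = 52, and b = 1023 as in the table:

        import struct

        def decode_double(x):
            # Reinterpret the 64 bits of an IEEE 754 double:
            # 1 sign bit, p = 11 exponent bits, q = 52 mantissa bits.
            bits = struct.unpack(">Q", struct.pack(">d", x))[0]
            s = bits >> 63
            e = (bits >> 52) & 0x7FF      # biased exponent
            m = bits & ((1 << 52) - 1)    # mantissa (trailing significand)
            bias = 2 ** 10 - 1            # b = 2^(p-1) - 1 = 1023
            return s, e - bias, m

        print(decode_double(1.0))   # (0, 0, 0): (-1)^0 * 2^0 * 1.0
        print(decode_double(-0.5))  # (1, -1, 0): (-1)^1 * 2^(-1) * 1.0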
    A practical consequence of this lack of arbitrary precision is that one can inadvertently map a large proportion of uniform random numbers to a single point mass using an absolutely continuous distribution. As an example [11], consider sampling X ~ Beta(1, 0.01) via the inverse-transform method (Section 3.1.1), which yields X = 1 − (1 − U)^100. Suppose we use the IEEE 754 standard for double precision floating-point numbers and round to the nearest floating-point number. In this case, all numbers X ∈ (1 − 2^(−52), 1] will be mapped to the floating point 1. Thus, all U ∈ (1 − 2^(−13/25), 1] will be mapped to 1, so a proportion 2^(−13/25) ≈ 0.70 of all uniform random numbers is mapped to the single point 1.
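    The point mass is easy to observe. The sketch below (my own check of the book's claim, not code from [11]) applies X = 1 − (1 − U)^100 to a large uniform sample and measures the fraction that lands exactly on the floating point 1:

        import numpy as np

        rng = np.random.default_rng(0)
        u = rng.uniform(size=1_000_000)
        x = 1.0 - (1.0 - u) ** 100   # inverse-transform sample of Beta(1, 0.01)

        # Fraction rounded to exactly 1.0 in double precision, next to the
        # predicted proportion 2^(-13/25) ~ 0.70.
        print(np.mean(x == 1.0), 2 ** (-13 / 25))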
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.