Mathematics

Sum of Independent Random Variables

The sum of independent random variables is the result of adding two or more random variables that are statistically independent of one another. In probability theory, such a sum has a well-defined distribution: the distribution of the sum is the convolution of the distributions of the individual variables. This concept is fundamental to understanding the behavior of random processes and systems.

Written by Perlego with AI-assistance

8 Key excerpts on "Sum of Independent Random Variables"

  • Probability and Random Processes: With Applications to Signal Processing and Communications

    • Scott Miller, Donald Childers (Authors)
    • 2012 (Publication Date)
    • Academic Press (Publisher)
    [Figure 7.5: Estimates of failure probabilities along with confidence intervals. The solid line is the true probability; the circles represent the estimates.]

    7.6 Random Sums of Random Variables

    The sums of random variables considered up to this point have always had a fixed number of terms. Occasionally, one also encounters sums of random variables in which the number of terms is itself random. For example, a node in a communication network may queue packets of variable length while they are waiting to be transmitted. The number of bytes in each packet, Xi, might be random, as might the number of packets in the queue at any given time, N. The total number of bytes stored in the queue would then be a random sum of the form
    (7.43)    S = Σ_{i=1}^{N} Xi
    Theorem 7.4: Given a set of IID random variables Xi with mean μX and variance σX², and an independent random integer N, the mean and variance of the random sum of the form given in Equation (7.43) are
    (7.44)    E[S] = μX E[N],
    (7.45)    Var(S) = σX² E[N] + μX² Var(N).
    Proof: To calculate the statistics of S, it is easier to first condition on N and then average the resulting conditional statistics with respect to N. To start with, consider the mean:
    (7.46)    E[S] = E[E[S | N]] = E[N μX] = μX E[N].
    The variance is found by a similar procedure. The second moment of S is found according to
    (7.47)    E[S²] = E[E[S² | N]] = E[ Σ_{i=1}^{N} Σ_{j=1}^{N} E[Xi Xj] ].
    Note that Xi and Xj are uncorrelated unless i = j. Therefore, this expected value works out to be
    (7.48)    E[S²] = E[N σX² + N² μX²] = σX² E[N] + μX² E[N²].
    Finally, using Var(S) = E[S²] − (E[S])² results in
    (7.49)    Var(S) = σX² E[N] + μX² E[N²] − μX² (E[N])² = σX² E[N] + μX² Var(N).
    One could also derive formulas for higher order moments in a similar manner.
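    A quick Monte Carlo check of (7.44) and (7.45) in R (a minimal sketch; the Poisson choice for N and the normal choice for the Xi are illustrative, not from the text):
    set.seed(7)
    nsim <- 1e5
    mu_x <- 2; sd_x <- 3                  # mean and standard deviation of the Xi
    n <- rpois(nsim, lambda = 4)          # random number of terms, N ~ Poisson(4)
    s <- sapply(n, function(k) sum(rnorm(k, mean = mu_x, sd = sd_x)))  # S = sum of N terms
    mean(s)   # near mu_x * E[N] = 2 * 4 = 8, per (7.44)
    var(s)    # near sd_x^2 * E[N] + mu_x^2 * Var(N) = 9*4 + 4*4 = 52, per (7.45)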
    Theorem 7.5: Given a set of IID random variables Xi with characteristic function ΦX(ω), and an independent random integer N with probability generating function HN(z), the characteristic function of the random sum of the form given in Equation (7.43) is
    (7.50)    ΦS(ω) = HN(ΦX(ω)).
    Proof: Following a derivation similar to that of the last theorem,
    (7.51)    ΦS(ω) = E[e^{jωS}] = E[E[e^{jωS} | N]] = E[(ΦX(ω))^N] = HN(ΦX(ω)).
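    A numerical sanity check of (7.50) in R (a minimal sketch; standard Gaussian Xi, and a hypothetical Poisson N so that HN(z) = exp(λ(z − 1))):
    set.seed(1)
    nsim <- 2e5
    lam <- 3
    n <- rpois(nsim, lam)                        # N ~ Poisson(lam)
    s <- sapply(n, function(k) sum(rnorm(k)))    # Xi ~ N(0,1), so PhiX(w) = exp(-w^2/2)
    w <- 1.2                                     # a test frequency
    mean(exp(1i * w * s))                        # empirical characteristic function of S
    exp(lam * (exp(-w^2/2) - 1))                 # HN(PhiX(w)), the right side of (7.50)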
    Example 7.19
    Suppose the Xi are Gaussian random variables with zero mean and unit variance and N
  • Probability Theory and Mathematical Statistics for Engineers
    • V. S. Pugachev (Author)
    • 2014 (Publication Date)
    • Pergamon (Publisher)
    In Section 1.2.1 an intuitive definition of a random variable was given based on experimentally observable facts, and it was shown that with every random variable may be connected certain events: its occurrences in different sets. For studying random variables it is necessary that probabilities be determined for some set of such events, i.e. that this set of events belong to the field of events δ connected with a trial. Furthermore, it is expedient to require that this set of events itself be a field of events (a subfield of the field δ). Thus we come to the following definition of a random variable.
    A random variable is a variable which assumes, as a result of a trial, one and only one value out of a set of possible values, and with which is connected some field of events representing its occurrences in given sets, contained in the main field of events δ.

    2.1.2 Scalar and vector random variables

    Random variables may be both scalar and vector. In correspondence with the general definition of a vector, we shall call a vector random variable, or a random vector, any ordered set of scalar random variables. Thus, for instance, an n-dimensional random vector X is a set of n scalar random variables X1, …, Xn. These random variables X1, …, Xn are called the components of the random vector X.
    In the general case the components of a random vector may be complex random variables (assuming complex numerical values as a result of a trial). But we may always get rid of complex variables by replacing every complex variable by a pair of real variables, namely its real and imaginary parts. Thus an n-dimensional vector with complex components may always be considered a 2n-dimensional vector with real components. However, this is not always advantageous; in many problems it is more convenient to work with complex random variables directly. Later on, for brevity, we shall call a vector with complex components a complex vector and a vector with real components a real vector.
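    A short R illustration of this real/imaginary decomposition (a minimal sketch; the normal distributions are arbitrary choices):
    set.seed(3)
    n <- 4
    re <- rnorm(n); im <- rnorm(n)               # random real and imaginary parts
    z <- complex(real = re, imaginary = im)      # an n-dimensional complex random vector
    c(Re(z), Im(z))                              # the same information as a 2n-dimensional real vector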
    Instead of a random vector we may evidently consider a random point
  • Integration, Measure and Probability
    μ₁) defined by
    and we can therefore say that x₁ is a random variable in Ω₁ with probability distribution μ₁. In the same way, x₂, …, x_k are random variables in Ω₂, …, Ω_k, with distributions μ₂, …, μ_k, respectively.
    If μ is the product measure of the measures in Ω₁, …, Ω_k, these measures must clearly be μ₁, μ₂, …, μ_k, and we then say that the random variables x₁, …, x_k are independent. Otherwise, they are called dependent. The number of variables is assumed here to be finite. The extension to infinite sets of variables involves deeper ideas and is postponed until Chapter 5.
    There is no difficulty in giving examples of independent or dependent variables. The probability distribution over the unit square in which μ(X) for a Borel set X is its ordinary Lebesgue measure clearly satisfies the condition for independence. On the other hand, the random variables x₁, x₂ which can take only the values 0 and 1 are not independent if their joint probability distribution is defined by
    We have seen in Section 7 that if a function y = φ(x) maps Ω on to Ω′, the sets Y of Ω′ whose inverse images are sets X measurable with respect to a probability measure μ in Ω form a σ-ring in Ω′, and we get a probability distribution υ in Ω′ by defining υ(Y) = μ(X) for the (υ-measurable) sets Y for which X = φ⁻¹(Y) and X is μ-measurable. We therefore speak of a random variable y as the function φ(x) of a random variable x
  • Biometry for Forestry and Environmental Data
    2   Random Variables  
      2.1  Introduction to random variables
    Random variables are variables that can take different values depending on the outcome of a random process. The value assigned to a random variable by a specific outcome of an experiment is called a realization. The random variable itself cannot be observed, but the realized value can be. A single realization, however, does not carry all the information about the properties of the random variable, and several realizations carry more information than one. A variable that is not random is fixed.
    Terms such as measurement and observation are often used synonymously with realization. However, ‘observation’ can also mean the unit or the subject from which we make measurements, and we can measure several variables for each unit. Measurements of a given property of a unit are called variables. For example, trees of a certain area may be the units selected for measurements of diameter, height, species, volume and biomass. Some quantities cannot be measured for practical or theoretical reasons, and they are called latent variables.
    Whenever the distinction between the random variable and the realized/observed value is shown explicitly, a common practice is to denote the random variable by an uppercase letter and the realized value by a lowercase letter. For example, X = x and Y = 2 mean that the values x and 2 were assigned to the random variables X and Y, respectively, by some random process.
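    In R, drawing from a distribution produces exactly such realizations (a minimal sketch; the normal distribution is an arbitrary choice):
    set.seed(10)
    y <- rnorm(5)    # five realizations of a random variable Y ~ N(0, 1)
    y                # the observed lowercase values; the random variable Y itself is not observable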
    2.1.1  Sources of randomness
    An intuitive reason for the randomness may be sampling: the unit was selected from a population of units randomly, and therefore any characteristic that one observes from that particular unit is affected by the unit selected. Another way of thinking is to assume that there is a certain, usually unknown random process, which generated the value for the unit of interest.
  • Probability: With Applications and R

    • Robert P. Dobrow (Author)
    • 2013 (Publication Date)
    • Wiley (Publisher)
    for all functions f and g,
    (4.4)    E[f(X) g(Y)] = E[f(X)] E[g(Y)].
    Letting f and g be the identity function gives
    (4.5)    E[XY] = E[X] E[Y].
    The expectation of a product of independent random variables is the product of their expectations. If X and Y are independent,
    E[XY] = Σ_x Σ_y xy P(X = x, Y = y) = Σ_x Σ_y xy P(X = x) P(Y = y) = ( Σ_x x P(X = x) ) ( Σ_y y P(Y = y) ) = E[X] E[Y].
    The general result follows similarly.
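    A quick numerical check of this product rule (a minimal sketch with arbitrary distributions):
    set.seed(2)
    x <- rnorm(1e5, mean = 2)     # X with E[X] = 2
    y <- rexp(1e5, rate = 0.5)    # Y independent of X, with E[Y] = 2
    mean(x * y)                   # approximately E[X] * E[Y] = 4
    mean(x) * mean(y)             # product of the sample means, for comparison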
     
    Example 4.15 Random cone. Suppose the radius R and height H of a cone are independent and each uniformly distributed on {1, …, 10}. Find the expected volume of the cone.
    The volume of a cone is given by the formula V = (π/3) r² h. Let V be the volume of the cone. Then V = (π/3) R² H and
    E[V] = E[(π/3) R² H] = (π/3) E[R² H] = (π/3) E[R²] E[H] = (π/3)(38.5)(5.5) ≈ 221.7,
    where the third equality uses the independence of R² and H, which is a consequence of the independence of R and H.
    R: EXPECTED VOLUME
    The expectation is simulated with the commands
    > simlist <- replicate(100000, (pi/3)*sample(1:10,1)^2*sample(1:10,1))
    > mean(simlist)
    [1] 220.9826
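    The exact value can be computed directly from the same formula (a minimal sketch):
    (pi/3) * mean((1:10)^2) * mean(1:10)   # (pi/3) * 38.5 * 5.5, approximately 221.74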
      

    4.4.1 Sums of Independent Random Variables

    Sums of random variables figure prominently in probability and statistics. To find probabilities of the form P(X + Y = k), observe that X + Y = k if and only if X = i and Y = k − i for some i. This gives
    (4.6)    P(X + Y = k) = Σ_i P(X = i, Y = k − i).
    If X and Y are independent, then
    (4.7)    P(X + Y = k) = Σ_i P(X = i) P(Y = k − i).
    The limits of the sum Σ_i depend on the possible values of X and Y. For instance, if X and Y are nonnegative integers, then
    P(X + Y = k) = Σ_{i=0}^{k} P(X = i) P(Y = k − i).
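    In R, (4.7) is a discrete convolution of the two probability mass functions; a minimal sketch for nonnegative integer-valued X and Y (the pmf values are hypothetical placeholders):
    px <- c(0.2, 0.5, 0.3)                        # hypothetical P(X = 0), P(X = 1), P(X = 2)
    py <- c(0.1, 0.6, 0.3)                        # hypothetical P(Y = 0), P(Y = 1), P(Y = 2)
    psum <- convolve(px, rev(py), type = "open")  # pmf of X + Y on 0, ..., 4 via (4.7)
    round(psum, 4)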
     
    Example 4.16 A nationwide survey collected data on TV usage in the United States. The distribution of U.S. households by number of TVs per household is given in Table 4.3. If two households are selected at random, find the probability that there are two TVs in the two households combined.
    TABLE 4.3:  Distribution of U.S. households by number of TVs.
    Let T1 and T2 be the numbers of TVs in the two households, respectively. Then,
      
     
    Example 4.17 During rush hour, the number of minivans M on a fixed stretch of highway has a Poisson distribution with parameter λM. The number of sports cars S on the same stretch has a Poisson distribution with parameter λS. If the numbers of minivans and sports cars are independent, find the probability mass function of the total number of vehicles M + S.
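    The question can be explored numerically; a minimal R sketch (the rate values are hypothetical) comparing the convolution (4.7) of the two Poisson pmfs with a single Poisson pmf of parameter λM + λS:
    lam_m <- 2; lam_s <- 3                       # hypothetical rates
    k <- 0:15
    # P(M + S = k) by the convolution formula (4.7)
    pmf_sum <- sapply(k, function(kk) sum(dpois(0:kk, lam_m) * dpois(kk:0, lam_s)))
    # the convolution matches a Poisson pmf with parameter lam_m + lam_s
    max(abs(pmf_sum - dpois(k, lam_m + lam_s)))  # numerically zero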
  • New Statistical Procedures for the Social Sciences: Modern Solutions to Basic Problems

    CHAPTER THREE RANDOM VARIABLES, DISTRIBUTIONS AND ESTIMATION

    3.1 RANDOM VARIABLES

    In probability theory the sample space can consist of any collection of objects or events that are of interest. However, when trying to develop statistical techniques, it is convenient to restrict attention to sample spaces containing only numbers. Fortunately, for the problems considered in this book, assigning numbers to outcomes is a simple process. In many cases the numbers used to describe outcomes suggest themselves in an obvious way. For example, when measuring how much people weigh, a pointer on the scale might indicate the symbol “120,” and so you would use 120 as the person's weight. In other cases numbers can be assigned in an arbitrary but convenient fashion that is appropriate for the problem at hand. For instance, if you want to conduct a study that deals with whether students pass (P) or fail (F) a statistics course, it turns out to be convenient to designate a “pass” with the number 1, and a “fail” with the number 0. The reader might not be completely convinced that assigning numbers to outcomes is always a simple matter, but eventually this will become clear.
    Definition 3.1.1. A random variable, X, is a rule (or function) that assigns a unique number to any element of the sample space. The set of possible values of the random variable X is called the space of X.
    Example 3.1.1. Consider again the situation where students either pass or fail a course. Then the random variable X is defined by
    X(P) = 1
    and
    X(F) = 0.
    However, it is cumbersome to write X(P) and X(F), and so it is more common to simply say that X equals 1 or 0 according to whether a student passes or fails. The only reason for writing X(P) = 1 was to emphasize that X is a function that assigns the number 1 to the event P.
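    A one-line R illustration of this indicator coding (the outcome vector is hypothetical):
    outcomes <- c("P", "F", "P", "P", "F")   # pass/fail outcomes for five students
    x <- ifelse(outcomes == "P", 1, 0)       # the random variable X applied to each outcome
    x                                        # 1 0 1 1 0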
    From chapter two
  • Essentials of Probability Theory for Statisticians
    5 are independent, we can write the event
    {X1 ∈ B1} ∩ {X2 ∈ B2} ∩ {X3 ∈ B3}    (4.9)
    involving the subcollection X1, X2, and X3 as
    {X1 ∈ B1} ∩ {X2 ∈ B2} ∩ {X3 ∈ B3} ∩ {X4 ∈ ℝ} ∩ {X5 ∈ ℝ},    (4.10)
    and the probability of Expression (4.10) is P(X1 ∈ B1, X2 ∈ B2, X3 ∈ B3) · (1) · (1). Therefore, the probability of Expression (4.9) is a product of its individual event probabilities if and only if the probability of Expression (4.10) is a product of its individual event probabilities. This motivates the following definition.
    Definition 4.36. Independence of random variables
    1. A finite collection of random variables X1, …, Xn is defined to be independent if
       P(X1 ∈ B1, …, Xn ∈ Bn) = ∏_{i=1}^{n} P(Xi ∈ Bi)
       for all one-dimensional Borel sets B1, …, Bn.
    2. A countably or uncountably infinite collection of random variables is defined to be independent if each finite subcollection satisfies condition 1.
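    As a numerical illustration of condition 1, a minimal R sketch using interval Borel sets (all choices here are illustrative):
    set.seed(42)
    n <- 1e5
    x1 <- rnorm(n); x2 <- runif(n)        # independent by construction
    in_b1 <- x1 <= 0                      # B1 = (-Inf, 0]
    in_b2 <- x2 >= 0.3 & x2 <= 0.7        # B2 = [0.3, 0.7]
    mean(in_b1 & in_b2)                   # estimate of P(X1 in B1, X2 in B2)
    mean(in_b1) * mean(in_b2)             # product of marginal estimates; nearly equal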
    It is helpful to have another way of checking for independence because Definition 4.36 requires consideration of all Borel sets. The following is a very useful result.
    Proposition 4.37. Independence ⇔ distribution function factors
    X1, …, Xn are independent by Definition 4.36 if and only if
    P(X1 ≤ x1, X2 ≤ x2, …, Xn ≤ xn) =
  • Elements of Stochastic Dynamics
    • Guo-Qiang Cai, Wei-Qiu Zhu (Authors)
    • 2016 (Publication Date)
    • WSPC (Publisher)
    ω ∈ Ω. We have the following definition:
    A random variable X(ω), ω ∈ Ω, is a function defined on a sample space Ω, such that for every real number x there exists a probability that X(ω) ≤ x, denoted by Prob[ω : X(ω) ≤ x].
    For simplicity, the argument ω in X(ω) of a random variable is usually omitted, and the probability can be written as Prob[X ≤ x].
    There are two types of random variables: discrete and continuous. A discrete random variable takes only a countable number, finite or infinite, of distinct values. On the other hand, the sample space of a continuous random variable is an uncountable continuous space.
    A random variable can be either a single-valued quantity or an n-dimensional vector described by n values. Unless specified otherwise, we assume implicitly that a random variable is a single-valued quantity throughout the book.

    2.2 Probability Distribution

    Since a random variable is uncertain, we describe it in terms of a probability measure. For a discrete random variable, the simplest and most direct way is to specify the probability that it takes each possible discrete value, written as
    PX(x) = Prob[X = x].
    In the notation PX(x), X is the random variable, and x is the state variable, i.e., the possible value of X. The convention of using a capital letter to denote a random quantity and a lowercase letter to represent its corresponding state variable will be followed throughout the book.
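    A small R illustration of this convention (the binomial distribution is an arbitrary example):
    x_vals <- 0:3                                  # possible state values x
    p_x <- dbinom(x_vals, size = 3, prob = 0.5)    # PX(x) = Prob[X = x] for X = heads in 3 fair tosses
    names(p_x) <- x_vals
    p_x
    cumsum(p_x)                                    # FX(x) = Prob[X <= x], the distribution function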
    Another probability measure used to describe a random variable is the probability distribution function, denoted as FX(x