Mathematics

Power Series

A power series is an infinite series of the form ∑(aₙxⁿ), where aₙ are coefficients and x is a variable. It represents a function as an infinite polynomial and is used to approximate functions, solve differential equations, and analyze functions in calculus and analysis. The convergence of a power series is determined by its radius of convergence.

Written by Perlego with AI-assistance

8 Key excerpts on "Power Series"

  • Single Variable Calculus
    eBook - ePub

    Volume 1: Single Variable Calculus

    • Galina Filipuk, Andrzej Kozłowski(Authors)
    • 2019(Publication Date)
    • De Gruyter
      (Publisher)
    6  Sequences and series of functions
    In this chapter we will consider the problem of approximating functions by polynomials and representing functions by Power Series. We will discuss the Taylor series and Mathematica®'s function Series. Next we consider uniform and almost uniform convergence of function sequences and series, and conditions for continuity and differentiability of their limits and sums.

    6.1 Power Series continued

    We will now return to the subject of Power Series, which we already introduced in Section 3.10 .
    A Power Series can be viewed as a generalization of a polynomial. Recall that polynomials are just lists of numbers (a₀, …, a_d) together with rules for adding and multiplying any two such lists. It is convenient to write polynomials in the form a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ, where x is called an indeterminate or a variable. Formal Power Series are defined in exactly the same way, except that we consider infinite rather than just finite sequences. Such a series can be written in the form a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ + ⋯, where x is again a variable. Two such series can also be added and multiplied (the multiplication being given by the Cauchy product).
    One important difference between polynomials and formal Power Series is that a polynomial always defines a function, given by substituting numbers for the variable x. In the case of formal Power Series, however, the situation is more complicated: although we can “substitute” a number for x, the number series thus obtained may not be convergent (it is always convergent when we substitute 0). To obtain a function we need to find the set of points where the series is convergent.
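The gap between a formal series and the function it defines can be made concrete with a small numerical sketch (the helper names are mine, not from the excerpt): taking every coefficient aₙ = 1 gives the geometric series ∑xⁿ, whose partial sums settle down for |x| < 1 and blow up outside.

```python
# A formal power series is just a coefficient sequence; evaluating it at a
# point x means taking the limit of partial sums, which may or may not exist.
# Helper names are illustrative, not from the excerpt.

def partial_sum(coeff, x, n_terms):
    """Evaluate a_0 + a_1*x + ... + a_{n_terms-1}*x^(n_terms-1)."""
    total, power = 0.0, 1.0
    for n in range(n_terms):
        total += coeff(n) * power
        power *= x
    return total

def geometric(n):
    return 1.0  # a_n = 1 for every n

# Inside |x| < 1 the partial sums converge (here toward 1/(1 - 0.5) = 2) ...
print([partial_sum(geometric, 0.5, n) for n in (5, 10, 20)])
# ... at x = 2 the same formal series diverges: the sums grow without bound.
print([partial_sum(geometric, 2.0, n) for n in (5, 10, 20)])
```

Substituting x = 0 always gives the single term a₀, matching the remark that convergence at 0 is automatic.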
    Also recall from Section 3.10 that we consider Power Series of the form ∑ₙ₌₀ aₙ(x − x₀)ⁿ, where x₀ is called the center of the series. It is not hard to prove that a Power Series defines a function that is continuous in the interior of its region of convergence. A result by Abel known as Abel’s continuity/limit theorem [14
  • Ordinary Differential Equations
    • Michael D. Greenberg(Author)
    • 2014(Publication Date)
    • Wiley
      (Publisher)
    a₁, … are to be determined so that, if possible, (5) satisfies (3). We will proceed rather formally since the purpose of this illustration is only to convey the main ideas.
    We can solve for a₁, a₂, … in terms of a₀, with a₀ remaining arbitrary.
    From (5),
    (6)
    and if we put (5) and (6) into (3), we obtain
    (7)
    or
    (8)
    For (8) to hold, the coefficient of each power of x must be zero, so a₁ + a₀ = 0, 2a₂ + a₁ = 0, 3a₃ + a₂ = 0, and so on. Solving these gives
    (9)
    and so on, in which a₀ remains arbitrary. Finally, putting (9) into (5) gives
    (10)
    which is indeed the same as (4), though in series form and with “a₀” in place of A.
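The coefficient relations just derived (a₁ + a₀ = 0, 2a₂ + a₁ = 0, 3a₃ + a₂ = 0, …) can be run mechanically. A minimal sketch, assuming the pattern naₙ + aₙ₋₁ = 0 continues for all n (helper names mine): the recursion yields aₙ = (−1)ⁿa₀/n!, and the summed series matches a₀e^(−x), in line with the excerpt's conclusion that the series solution agrees with (4).

```python
# Generate the series coefficients from the recursion (n+1)*a_{n+1} + a_n = 0
# and compare the partial sum against a_0 * exp(-x). Names are illustrative.
import math

def series_coefficients(a0, n_max):
    coeffs = [a0]
    for n in range(n_max):
        coeffs.append(-coeffs[-1] / (n + 1))  # a_{n+1} = -a_n / (n+1)
    return coeffs

def evaluate(coeffs, x):
    return sum(a * x**n for n, a in enumerate(coeffs))

a0 = 1.0
coeffs = series_coefficients(a0, 30)
x = 1.5
print(evaluate(coeffs, x), a0 * math.exp(-x))  # the two agree closely
```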
    To develop a systematic approach to obtaining series solutions, particularly for the more difficult case of nonconstant-coefficient equations, we begin with a review of Power Series and Taylor series.

    6.2 Power Series and Taylor Series

    6.2.1 Power Series. A Power Series about x = x₀ is a series of the form

    (1) ∑ₙ₌₀ aₙ(x − x₀)ⁿ

    in which the coefficients a₀, a₁, a₂, … and the center x₀ are constants. The exponent is the order of the term; for instance, a₃(x − x₀)³ is the third-order term. Further, “∑Aₙ converges to s” means that the sequence of partial sums sₙ = A₀ + A₁ + ⋯ + Aₙ tends to s as n → ∞.
    Since the terms in (1) are functions of x rather than constants, they are different in general from one x to another; for instance, the series ∑xⁿ is 1 + 1/2 + ⋯ at x = 1/2, and 1 + 1/10 + ⋯ at x = 1/10. Consequently, (1) may converge at some points on the x axis and diverge at others. At the least, it converges at the center x₀, because there it reduces to the single term a₀.
    If it converges at other points as well, that convergence will occur in an interval of convergence |x − x₀| < R centered at x₀, with the radius of convergence R being either finite or infinite. If R is finite (as in Fig. 1), then at each endpoint the series may converge or diverge, but convergence or divergence at the endpoints of the interval of convergence will not be relevant in this chapter, so we will not bring up this matter again. To determine R, we can apply one of the standard convergence tests, such as the ratio test or the n
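The ratio test mentioned at the end of the excerpt can be sketched numerically (function names mine): when the limit exists, R = lim |aₙ/aₙ₊₁|, so the ratio at a large index approximates the radius of convergence.

```python
# Estimate the radius of convergence via the ratio test, R ~ |a_n / a_{n+1}|
# at a large index n. The two coefficient choices below are my examples.
import math

def ratio_estimate(a, n):
    """|a_n / a_{n+1}|, which approximates R for large n when the limit exists."""
    return abs(a(n) / a(n + 1))

# sum x^n / (n+1) has R = 1: the ratios (n+2)/(n+1) tend to 1.
print(ratio_estimate(lambda n: 1.0 / (n + 1), 1000))
# sum x^n / n! has R = infinite: the ratios (n+1) grow without bound.
print(ratio_estimate(lambda n: 1.0 / math.factorial(n), 50))
```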
  • Mathematical Analysis and Proof
    12

    Functions Defined by Power Series

    12.1 Introduction

    We are led naturally to the study of those functions which can be represented as the sums of Power Series for two reasons. Taylor’s Theorem allows us to express a function f as a sum of finitely many terms of the form (x − a)ⁿ f⁽ⁿ⁾(a)/n! plus a remainder, and we may observe that in some cases the remainder tends to zero as the number of terms increases; this may be thought of as expressing f as a sum of the simpler functions (x − a)ⁿ. More significantly, we arrive at Power Series by attempting to find functions with desirable properties, usually arising from differential equations. If we wish to find a function equal to its own derivative, then if this function were expressible in the form ∑ₙ₌₀ aₙxⁿ, and if the derivative turned out to equal the expression obtained by differentiating term by term, ∑ₙ₌₁ naₙxⁿ⁻¹, then the equality of the function and its derivative would be guaranteed if we were to choose the coefficients aₙ so that these two series were identical, that is, aₙ = a₀/n!. Assuming this outline programme is correct and, in particular, that the interchange of the limits involved in taking derivatives of infinite sums can be justified, this yields a simple technique for producing various special functions required in everyday mathematics.
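The matching of coefficients described above can be checked directly: with aₙ = a₀/n!, the k-th coefficient of the termwise derivative, (k+1)aₖ₊₁, equals aₖ. A small sketch (variable names mine):

```python
# Check that a_n = a_0 / n! makes the termwise-differentiated series
# identical, coefficient by coefficient, to the original series.
import math

a0 = 2.0
a = [a0 / math.factorial(n) for n in range(20)]

# Termwise derivative of sum a_n x^n is sum n*a_n x^(n-1);
# its k-th coefficient is (k+1) * a_{k+1}.
deriv = [(k + 1) * a[k + 1] for k in range(19)]

print(all(math.isclose(a[k], deriv[k]) for k in range(19)))  # True
```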
    In what follows we shall consider functions of the form ∑ₙ₌₀ aₙxⁿ where aₙ does not depend on x and the term a₀x⁰ is understood to mean a₀. By a simple change of variable this allows us to consider functions of the form ∑ aₙ(x − a)ⁿ.
    The first thing to recall is the result of Lemma 8.11, that if the Power Series ∑ aₙwⁿ converges, then ∑ aₙxⁿ converges if |x| < |w
  • Transition to Analysis with Proof
    Chapter 13 Elementary Transcendental Functions
    13.1 Power Series
    Preliminary Remarks
    When we learn about Power Series, and especially Taylor’s formula, in calculus class we generally come away with the impression that most any function can be expanded in a Power Series. Unfortunately this is not the case. Functions that have convergent Power Series expansions are called real analytic functions, and have many special properties. We shall learn about them in the present section. A series of the form ∑ⱼ₌₀ aⱼ(x − c)ʲ is called a Power Series expanded about the point c. Our first task is to determine the nature of the set on which a Power Series converges.
    Proposition 13.1 Assume that the Power Series ∑ⱼ₌₀ aⱼ(x − c)ʲ converges at the value x = d with d ≠ c. Let r = |d − c|. Then the series converges uniformly and absolutely on compact subsets of I = { x : |x − c| < r }.
    Proof: We may take the compact subset of I to be K = [c − s, c + s] for some number 0 < s < r. For x ∊ K it then holds that ∑ⱼ₌₀ |aⱼ(x − c)ʲ| = ∑ⱼ₌₀ |aⱼ(d − c)ʲ| · |(x − c)/(d − c)|ʲ. In the sum on the right, the first expression in absolute values is bounded by some constant C (by the convergence hypothesis). The quotient in absolute values is majorized by L = s/r < 1. The series on the right is thus dominated by ∑ⱼ₌₀ C · Lʲ. This geometric series converges. By the Weierstrass M-Test, the original series converges absolutely and uniformly on K. □
    An immediate consequence of the proposition is that the set on which the Power Series ∑ⱼ₌₀ aⱼ(x − c)ʲ converges is an interval centered about c. We call this set the interval of convergence. The series will converge absolutely and uniformly on compact subsets of the interval of convergence. The radius of the interval of convergence (called the radius of convergence) is defined to be half the length of the interval of convergence
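The domination step in the proof can be checked numerically for a concrete series. A sketch under my own choices (aⱼ = 2⁻ʲ so the series converges at d = 1, center c = 0, and K = [−s, s] with s = 0.9): every term's supremum on K is bounded by C·Lʲ.

```python
# Numeric check of the Weierstrass M-test bound from the proposition's proof:
# sup over K of |a_j (x - c)^j| <= C * L^j, where C bounds |a_j (d - c)^j|
# and L = s / r < 1. The series and constants below are my example choices.

def a(j):
    return 0.5 ** j  # a_j = 2^(-j); sum a_j * 1^j converges at d = 1

d, r = 1.0, 1.0          # r = |d - c| with center c = 0
s = 0.9                  # K = [-s, s] for some 0 < s < r
L = s / r                # majorizes |(x - c)/(d - c)| on K
C = max(abs(a(j)) * d ** j for j in range(200))  # bounds |a_j (d - c)^j|

# sup over K of |a_j x^j| is |a_j| * s^j; check it is dominated by C * L^j.
dominated = all(abs(a(j)) * s ** j <= C * L ** j for j in range(200))
print(dominated)  # True
```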
  • Infinite Sequences and Series

    Chapter 4

    Power Series

    4.1.   The circle of convergence

    We have already encountered, several times, series of the form Σ a_ν z^ν, where z has been permitted to be arbitrary to a certain extent. Such series, and, somewhat more generally, series of the form Σ a_ν (z − z₀)^ν, where z₀ is a fixed number, are called Power Series. In what follows, there is usually no loss of generality in considering only Power Series of the first form. For if we set z − z₀ = z′ for abbreviation, and then drop the accent, the second form goes over into the first.
    Examples of such series were
    The first converges if, and only if, |z| < 1, i.e., in the interior of the unit circle. The third converges for every z, i.e., “in the entire plane”. Finally, Σ ν^ν z^ν is an example of a Power Series that converges only for z = 0, because, for z ≠ 0, ν^ν z^ν = (νz)^ν obviously does not tend to 0.
    We shall show, first of all, that every Power Series possesses an analogous convergence behavior, i.e., that it converges either in the entire plane, or in a certain circle about 0 as center, or only for z = 0. Indeed, we have
    Theorem 1. Let Σ a_ν z^ν be an arbitrary Power Series, and set α = lim sup |a_ν|^(1/ν). Then,
    a) for α = 0, the series is everywhere convergent;
    b) for α = +∞, the series is divergent for every z ≠ 0;
    c) if, finally, 0 < α < +∞, then the series is absolutely convergent for every z with |z| < r = 1/α, divergent for every z with |z| > r. (The behavior of the series on the circumference |z| = r can then be quite varied; see below.) Thus we have in all three cases, with suitable interpretation, r = 1/α.
    PROOFS. a) α = 0 means that |a_ν|^(1/ν) → 0, because α is the limit superior of these roots. Hence, if z is an arbitrary number, then also |a_ν z^ν|^(1/ν) = |z| · |a_ν|^(1/ν) → 0. The assertion now follows from the radical test.
    b) Let α = +∞ and z ≠ 0, so that |z| > 0. Then, according to 2.2, 5, |a_ν|^(1/ν) > 1/|z|, or |a_ν z^ν| > 1, infinitely often, and consequently Σ a_ν z^ν is divergent.
    c) In this case, let z be an arbitrary, but henceforth fixed, number with |z| < r. Choose a positive number ρ for which |z| < ρ < r, and hence
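The radical (root) test behind Theorem 1 can be sketched numerically (names mine): the ν-th root |a_ν|^(1/ν) at a large index approximates α, and r = 1/α.

```python
# Estimate alpha = limsup |a_v|^(1/v) by taking the v-th root at a large
# index. The two coefficient sequences are my examples, matching the
# excerpt's extreme cases.

def root_test(a, v):
    """|a_v|^(1/v), which approximates alpha for large v."""
    return abs(a(v)) ** (1.0 / v)

# a_v = 1: alpha = 1, so r = 1/alpha = 1 (convergence inside the unit circle).
print(root_test(lambda v: 1.0, 1000))
# a_v = v^v: the roots equal v, so alpha is infinite and r = 0
# (the series converges only for z = 0, as in the excerpt's example).
print(root_test(lambda v: float(v) ** v, 100))
```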
  • Complex Variables
    5

Analytic Functions and Power Series

    Section 5.1 SEQUENCES AND SERIES

    5.1.0 Introduction

    In Chapter 4 we developed the machinery of complex line integration, used this to prove the Cauchy Integral Theorem and Formula, and at last derived from these a host of marvelous consequences: infinite differentiability of analytic functions, the Maximum Modulus Principle, Liouville’s Theorem, and so on. The pattern of the present chapter will be similar. We will first develop the general machinery of sequences and series. It is possible you have seen some of this already. Then we will establish Taylor’s and Laurent’s Theorems. These will assure us that the functions we are interested in may be written as so-called Power Series. Thus, for example, we will see that the familiar exponential function may be written e^z = 1 + z + z²/2! + z³/3! + ⋯
    Finally, we will gain further insight into the nature of analytic functions by examining the related Power Series. Be encouraged that this gain in insight will justify our initial technical labors.
    Let us first mention a major technical question in an informal way. Suppose we are given a Power Series, an infinite formal sum a₀ + a₁z + a₂z² + a₃z³ + ⋯.
    This is an “infinitely long polynomial” in the variable z with given coefficients a₀, a₁, a₂, a₃, …. What is the result of “plugging in” a particular complex number for z, say, z = z₁? While we know what it means to add two or three or any finite string of complex numbers, we must make some careful definitions before an infinite sum makes sense. Let us turn to this now.
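The “plugging in” question can be sketched for the exponential, using the standard expansion e^z = ∑ zⁿ/n! (helper names mine): summing the first few dozen terms at a complex point already matches the library exponential.

```python
# Evaluate the power series of exp at a complex point by accumulating
# terms z^n / n! and compare against cmath.exp. Names are illustrative.
import cmath

def exp_series(z, n_terms=40):
    total, term = 0 + 0j, 1 + 0j
    for n in range(n_terms):
        total += term
        term *= z / (n + 1)  # turn z^n/n! into z^(n+1)/(n+1)!
    return total

z = 1 + 2j
print(exp_series(z), cmath.exp(z))  # the two agree to machine precision
```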

    5.1.1 Series of Real Numbers

    Complex Power Series will be discussed in terms of series of complex numbers. In turn, series of complex numbers will be treated in terms of series of real numbers. We will recall here some of the basic facts about real series (essentially one definition and one theorem). All subsequent theory will depend on these.
    A real series is an infinite formal sum
    with each term uₖ in
  • Multivariable Calculus
    • L. Corwin(Author)
    • 2017(Publication Date)
    • CRC Press
      (Publisher)
    17 Infinite Series of Functions
    We now turn to the study of infinite series of functions. The most obvious notion of convergence for a series of functions is pointwise convergence. However, this turns out to be less important than another sort of convergence, uniform convergence. We define this new notion, use it to prove a number of theorems, and apply these theorems to the study of Power Series. We conclude with a brief introduction to Fourier series. These are sums of sine and cosine functions; they are important in both pure and applied mathematics.
    1. Uniform Convergence
    We begin by formally defining the most obvious notion of convergence for a series of functions. (There is a similar definition for sequences.) Suppose that U is a subset of Rⁿ, and let {fⱼ} be a sequence of functions defined from U to R. We say that ∑ⱼ₌₁ fⱼ converges pointwise to f (or that ∑ⱼ₌₁ fⱼ = f) if for every point v ∈ U, ∑ⱼ₌₁ fⱼ(v) = f(v). Although this definition of convergence for functions probably seems the most natural, it has some disadvantages; the limit function f may be quite badly behaved even though the functions fⱼ are smooth. Here is one example (others will follow). Let fⱼ(x) = xʲ − xʲ⁺¹ for x ∈ [0, 1]. Each function fⱼ is, of course, continuous. Now let sₙ(x) = ∑ⱼ₌₁ⁿ fⱼ(x). It is easy to check by induction that sₙ(x) = x − xⁿ⁺¹. When 0 ≤ x < 1, xⁿ⁺¹ approaches 0 as n approaches ∞, and therefore sₙ(x) approaches x. When x = 1, however, sₙ(x) = 0 for all n, so sₙ(x) approaches 0. Therefore ∑ⱼ₌₁ fⱼ(x) = f(x), where f(x) = x if 0 ≤ x < 1 and f(x) = 0 if x = 1. That is, the pointwise sum of the continuous functions fⱼ is discontinuous, an awkward state of affairs. To rectify matters, we need another definition. Let {gₙ} be a sequence of real- (or complex-) valued functions defined on the set U
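The discontinuous pointwise limit can be observed numerically. A short sketch using the partial sums sₙ(x) = x − xⁿ⁺¹ stated in the excerpt, so each term is fⱼ(x) = xʲ − xʲ⁺¹ (function names mine):

```python
# Partial sums of the telescoping series f_j(x) = x^j - x^(j+1) on [0, 1]:
# s_n(x) = x - x^(n+1), which tends to x for x < 1 but equals 0 at x = 1.

def s_n(x, n):
    return sum(x**j - x**(j + 1) for j in range(1, n + 1))

for x in (0.5, 0.99, 1.0):
    print(x, s_n(x, 1000))
# Near x = 1 the sums creep toward x, yet at x = 1 every sum is exactly 0,
# so the pointwise limit function jumps at the right endpoint.
```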
  • Infinite Series in a History of Analysis
    1.4 ).
    Indeed, for every choice of c₀, c₁, the recursion (9) generates a Power Series that converges on all of ℝ. If we take c₀ = 1, c₁ = 0, then cₖ = 0 for all odd values of k, whereas c₂ₖ = (−1)ᵏ/(2k)!.
    And indeed: cos x is a solution. The theory says that the multitude of all solutions on ℝ, or on appropriate sub-domains, is furnished in terms of y(x) = c₀ cos x + c₁ sin x with whatever
    The heuristics above go by the name of Power Series method {4a}, with “method” in the sense of a suggested setup or pattern, where even the German “Potenzreihen-Ansatz” is adopted as “ansatz” {4b}. Of course, the method does not work in case there is no solution that allows for a Power Series expansion.
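The Power Series method can be sketched for this example. A minimal sketch, assuming the underlying equation is y″ + y = 0 (consistent with the stated solutions c₀ cos x + c₁ sin x; the recursion (9) itself is not shown in the excerpt): substituting y = ∑ cₖxᵏ gives cₖ₊₂ = −cₖ/((k+1)(k+2)).

```python
# Power-series ansatz for y'' + y = 0 (my assumed equation): the recursion
# c_{k+2} = -c_k / ((k+1)(k+2)) follows from matching coefficients of x^k.
import math

def series_solution(c0, c1, n, x):
    c = [c0, c1]
    for k in range(n - 2):
        c.append(-c[k] / ((k + 1) * (k + 2)))
    return sum(ck * x**k for k, ck in enumerate(c))

x = 1.2
print(series_solution(1.0, 0.0, 30, x), math.cos(x))  # c0=1, c1=0 gives cos
print(series_solution(0.0, 1.0, 30, x), math.sin(x))  # c0=0, c1=1 gives sin
```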
    * * *
    There were Taylor series before and long before Taylor ({5a}; 3.2.1). James Gregory detected “Taylor’s theorem” {5b} and presumably had deeper insight. Since a concept of function was missing in those days, it was commonplace to suppose “every function” to hold its Taylor expansion. Having been sure to have this verified {2d, 6a}, the great Lagrange suggested access to the differential quotient through a bypass that would dispense with the odious visions due to Newton and Leibniz: in view of (5) and (7), he conceived f′(x₀) to be the coefficient a₁ in the Power Series expansion of f about x₀ – mistakenly considered a matter of algebra {2e, 5c, 6b}.
    However, Lagrange truly was the pioneer to engage in overcoming the obscurities of the initial calculus. His was the first step to validate Taylor expansion: when inquiring into what is left when approaching a function by the partial sums of its Taylor series, Lagrange {2f, 6c, 7} provided a representation of the remainder in order to enable numerical control of the approximation.
    Lagrange excelled in his contributions to the calculus of variations {2g, 5d}. W.C. Dampier held him to be “perhaps the greatest mathematician of the century” {8a}, thus belittling the achievements of Euler, to whom the distinction is attributed by mathematicians and historians {6d, 8b}. Pierre Simon Laplace, the master of celestial mechanics, kept saying “Read Euler, read Euler, he is the master of us
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.