Mathematics

Taylor Polynomials

Taylor polynomials approximate a function by taking a finite number of terms from its Taylor series. They are used to estimate a function's values and behavior near a chosen expansion point. The accuracy of the approximation depends on the number of terms used, on how far the evaluation point lies from the expansion point, and on how large the function's higher derivatives are.
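
For example, the degree-3 Taylor polynomial of $e^x$ about $x_0 = 0$ is

$$T_3(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6},$$

and already $T_3(0.1) \approx 1.105167$ agrees with $e^{0.1} \approx 1.105171$ to within about $4 \times 10^{-6}$.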

Written by Perlego with AI-assistance

4 Key excerpts on "Taylor Polynomials"

  • Single Variable Calculus

    Volume 1: Single Variable Calculus

    • Galina Filipuk, Andrzej Kozłowski (Authors)
    • 2019 (Publication Date)
    • De Gruyter (Publisher)
    Still, it can be very useful for solving certain problems. To obtain global information about the approximation of a function by its Taylor polynomials we need a different form for the remainder. There are many of them, but the best-known one is Lagrange's form. It requires stronger assumptions on the function f.

    Theorem 19 (The Lagrange Remainder Terms Theorem [14, Theorem 8.44]). Let f be an (n + 1) times differentiable function in the interval (a, b) and suppose that the n-th derivative of f is continuous at a and b. Then there exists some c ∈ (a, b) such that
    $$f(x) = T_n(f)(x) + \frac{f^{(n+1)}(c)}{(n+1)!}(x - x_0)^{n+1}.$$
    Note that for n = 0 this gives the Mean Value Theorem. In general we know nothing about the value of c except that it lies in the interval (a, b). However, this information is often sufficient to obtain a global estimate of the quality of the Taylor approximation. A very useful consequence is the following statement.

    Corollary 1. Let f be as in Theorem 19 and suppose that $|f^{(n+1)}(x)| \le M$ for all x ∈ (a, b). Then
    $$|f(x) - T_n(f)(x)| \le \frac{M(b - a)^{n+1}}{(n+1)!}$$
    on the whole interval (a, b). Note that the right-hand side tends to 0 as n → ∞.

    If f is differentiable infinitely many times and if the result above holds for every positive integer n, then the Taylor polynomials are the partial sums of a series that converges to f on (a, b). For example, if f(x) = sin(x), then f^(n)(x) is bounded on the entire real line for all n, and we see that f is the sum of its Taylor series. Since f was defined as the sum of a convergent power series in Section 3.10, and we know that such a representation must be unique, the Taylor series and the power series centered at 0 which defines the function sin must be the same. Exactly the same is true for the function f(x) = exp(x). Even though its derivatives are not bounded globally (as in the case of sin or cos), they are still bounded by the same constant for every n on (a, b)
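
    The corollary gives a computable worst-case error. As a quick illustration (not part of the excerpt), the following Python sketch builds the degree-n Taylor polynomial of sin centred at x_0 = 0 and checks the observed error on (a, b) = (−1, 1) against the Lagrange bound M(b − a)^{n+1}/(n + 1)! with M = 1, since every derivative of sin is bounded by 1.

    ```python
    import math

    def taylor_sin(x, n, x0=0.0):
        """Degree-n Taylor polynomial of sin centred at x0."""
        # The derivatives of sin cycle through sin, cos, -sin, -cos.
        derivs = [math.sin(x0), math.cos(x0), -math.sin(x0), -math.cos(x0)]
        return sum(derivs[k % 4] / math.factorial(k) * (x - x0) ** k
                   for k in range(n + 1))

    a, b, n = -1.0, 1.0, 7
    # Lagrange bound from the corollary, with M = 1 for sin.
    bound = (b - a) ** (n + 1) / math.factorial(n + 1)
    # Largest observed error on a fine grid inside (a, b).
    worst = max(abs(math.sin(x) - taylor_sin(x, n))
                for x in (a + i * (b - a) / 1000 for i in range(1001)))
    print(f"observed max error {worst:.2e} <= Lagrange bound {bound:.2e}")
    ```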
  • Geometry for Programmers
    • Oleksandr Kaleniuk (Author)
    • 2023 (Publication Date)
    • Manning (Publisher)

    6 Polynomial approximation and interpolation

    This chapter covers
    • Understanding polynomials and their properties
    • Using polynomial interpolation and approximation to describe continuous phenomena
    • Understanding power series and their balance between speed and accuracy
    • Circumventing the limitations of polynomials for data representation
    A polynomial is the simplest mathematical object malleable enough to present the concepts of approximation and interpolation well. You have an approximation when you have a complex mathematical object and want to represent it approximately with a simpler one, such as when you want to find a linear function that represents 1000 data points or when you want to emulate a trigonometric function using only multiplications and additions.
    Approximation is important in data representation when we want to show the general trend behind some data and are fine with the approximating function missing the actual data points. But in a special case of approximation called interpolation, the approximating function goes through all the data points precisely. Interpolation is often used in descriptive geometry for building curves and surfaces. In chapter 8, we'll learn to build surfaces using polynomial interpolation.
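
    To make the distinction concrete, here is a small NumPy sketch (mine, not the book's) that fits the same toy data two ways: a low-degree least-squares fit that only follows the trend, and a full-degree interpolating polynomial that passes through every point.

    ```python
    import numpy as np

    # Toy data: 7 samples of a quadratic trend plus a small wiggle.
    xs = np.linspace(0.0, 1.0, 7)
    ys = xs**2 + 0.05 * np.sin(20 * xs)

    # Approximation: a low-degree least-squares fit follows the general trend
    # but is allowed to miss the individual data points.
    trend = np.polyfit(xs, ys, 2)

    # Interpolation: a degree-6 polynomial through 7 points hits every point exactly.
    interp = np.polyfit(xs, ys, len(xs) - 1)

    print(np.max(np.abs(np.polyval(trend, xs) - ys)))   # visible residual at the data
    print(np.max(np.abs(np.polyval(interp, xs) - ys)))  # ~0, up to rounding error
    ```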
    Moreover, understanding polynomials and polynomial interpolation is a necessary step toward understanding polynomial splines, which we’ll discuss in chapter 7, and splines in general are staples of computer graphics and computer-aided design.

    6.1 What are polynomials?

    A polynomial is an expression that consists only of multiplications, additions, and non-negative exponentiations of variables and constant coefficients. In a canonical representation, the powers of the variable are sorted in reverse order, and every variable power n is multiplied by its coefficient. This is a canonical representation of a polynomial:
    $$P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0$$
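
    As a brief aside (not from the book), a polynomial given by its coefficients in this canonical, highest-power-first order can be evaluated using only multiplications and additions, for example with Horner's scheme:

    ```python
    def eval_poly(coeffs, x):
        """Evaluate a_n*x^n + ... + a_1*x + a_0 with Horner's scheme.

        `coeffs` lists the coefficients from the highest power down to a_0,
        matching the canonical representation above.
        """
        result = 0.0
        for a in coeffs:
            result = result * x + a
        return result

    # P(x) = 2x^3 - 3x^2 + 5 evaluated at x = 2 gives 2*8 - 3*4 + 5 = 9.
    print(eval_poly([2, -3, 0, 5], 2))
    ```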
  • Fundamentals of Numerical Mathematics for Physicists and Engineers
    • Alvaro Meseguer (Author)
    • 2020 (Publication Date)
    • Wiley (Publisher)
    Such is the case for Maxwell's equations of electromagnetism, Schrödinger's equation of quantum mechanics, or Navier–Stokes equations of fluid dynamics, to mention just a few. These are complicated equations involving the unknown functions (electric and magnetic fields, for example) and their corresponding derivatives with respect to space and time. In general, solving the differential equations arising in mathematical physics or engineering is a complicated task, only feasible in a limited number of cases.

    In numerical mathematics, it is common practice to approximate derivatives and integrals of arbitrary functions by an indirect method. The first step consists of approximating the function by means of polynomials. These polynomial combinations are intended to resemble the function to the highest possible accuracy within either a given domain (global approximation) or in the neighborhood of a point on the real axis (local approximation). The second step consists of differentiating or integrating these polynomials instead of the original function. If these polynomials are accurate approximations of the original function, their differentiation and integration are expected to be also very good approximations of the derivative and integral of the original function, respectively.

    The motivation for using polynomials is that they are easy to differentiate or integrate and, if they are suitably used, they also provide unbeatable accuracy. However, we will see in Part II that functions can also be approximated using more general formulations. There are different ways of approximating functions using polynomials, and in this chapter we will address one of them: polynomial interpolation. As we will see in Chapters 3 and 4, polynomial interpolation lies at the very heart of the most commonly used rules to approximate derivatives and integrals (quadratures) of functions
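
    A minimal sketch of this two-step idea (my own, assuming NumPy; the choice of cos on [0, 1] as the "complicated" function is arbitrary): fit an interpolating polynomial to a few samples of the function, then differentiate and integrate the polynomial in place of the original function.

    ```python
    import numpy as np

    # Step 1: approximate the function (here cos, as a stand-in) by an
    # interpolating polynomial built from a few samples on [0, 1].
    nodes = np.linspace(0.0, 1.0, 7)
    coeffs = np.polyfit(nodes, np.cos(nodes), 6)

    # Step 2: differentiate and integrate the polynomial instead of the function.
    dcoeffs = np.polyder(coeffs)   # stands in for the derivative of cos
    icoeffs = np.polyint(coeffs)   # stands in for an antiderivative of cos

    x = 0.5
    print(np.polyval(dcoeffs, x), -np.sin(x))                                # ~ cos'(0.5)
    print(np.polyval(icoeffs, 1.0) - np.polyval(icoeffs, 0.0), np.sin(1.0))  # ~ integral over [0, 1]
    ```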
  • Applied Differential Equations with Boundary Value Problems
    3.3 The Polynomial Approximation
    One of the alternative approaches to the discrete numerical methods discussed in §3.2 is the Taylor series method. If the slope function f(x, y) in the initial value problem
    $$y' = f(x, y), \qquad y(x_0) = y_0, \tag{3.3.1}$$
    is sufficiently differentiable or is itself given by a power series, then the numerical integration by Taylor's expansion is possible. In what follows, it is assumed that f(x, y) possesses continuous derivatives with respect to both x and y of all orders required to justify the analytical operations to be performed.
    There are two ways in which a Taylor series can be used to construct an approximation to the solution of the initial value problem. One can either utilize the differential equation to generate Taylor Polynomials that approximate the solution, or one can use a Taylor polynomial as a part of a numeric integration scheme similar to Euler’s methods. We are going to illustrate both techniques.
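
    As a rough, hedged preview of the second technique (a Taylor-based stepping scheme analogous to Euler's method), the sketch below advances the test problem y′ = x + y, y(0) = 1, whose exact solution is y(x) = 2eˣ − x − 1, using the first two derivatives supplied by the differential equation. The test problem and step size are my own choices, not the book's.

    ```python
    import math

    def f(x, y):
        """Slope function of the example problem y' = x + y."""
        return x + y

    def y2(x, y):
        """y'' obtained from the ODE by the chain rule: y'' = f_x + f_y * f = 1 + x + y."""
        return 1.0 + x + y

    def taylor2_step(x, y, h):
        """One step of the order-2 Taylor method: Euler's step plus the h^2/2 correction."""
        return y + h * f(x, y) + 0.5 * h * h * y2(x, y)

    x, y, h = 0.0, 1.0, 0.1
    for _ in range(10):                      # integrate from x = 0 to x = 1
        y = taylor2_step(x, y, h)
        x += h
    print(y, 2.0 * math.exp(1.0) - 2.0)      # numerical vs exact y(1)
    ```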
    It is known from calculus that a smooth function g(x) can be approximated by its Taylor polynomial of order n:
    $$p_n(x) = g(x_0) + g'(x_0)(x - x_0) + \cdots + \frac{g^{(n)}(x_0)}{n!}(x - x_0)^n, \tag{3.3.2}$$
    which is valid in some neighborhood of the point x = x_0. How good this approximation is depends on the existence of the derivatives of the function g(x) and their values at x = x_0, as the following statement shows (consult an advanced calculus course).
    Theorem 3.1: [Lagrange] Let g(x) − p_n(x) measure the accuracy of the polynomial approximation (3.3.2) of the function g(x), which possesses (n + 1) continuous derivatives on an interval containing x_0 and x. Then
    $$g(x) - p_n(x) = \frac{g^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1},$$
    where ξ, although unknown, is guaranteed to be between x_0 and x.
    Therefore, one might be tempted to find the solution to the initial value problem (3.3.1)
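
    The first technique the authors describe, generating Taylor polynomials of the solution directly from the differential equation, can be sketched roughly with SymPy as follows; the test problem y′ = x + y, y(0) = 1 and the polynomial degree are my own choices, not the book's.

    ```python
    import sympy as sp

    x = sp.symbols("x")
    y = sp.Function("y")

    # Example problem: y' = x + y, y(0) = 1 (exact solution 2*exp(x) - x - 1).
    f = x + y(x)
    x0, y0, n = 0, 1, 5

    # Use the ODE to generate y'(x0), y''(x0), ..., y^(n)(x0): each further
    # derivative is obtained by differentiating and substituting y'(x) -> f.
    derivs = [sp.Integer(y0)]          # y(x0)
    expr = f
    for k in range(1, n + 1):
        derivs.append(expr.subs(y(x), y0).subs(x, x0))
        expr = sp.diff(expr, x).subs(sp.Derivative(y(x), x), f)

    # Assemble the degree-n Taylor polynomial of the solution about x0.
    taylor = sum(d / sp.factorial(k) * (x - x0) ** k for k, d in enumerate(derivs))
    print(sp.expand(taylor))
    # Compare with the series of the exact solution:
    print(sp.series(2 * sp.exp(x) - x - 1, x, x0, n + 1))
    ```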