Mathematics

Fundamental Theorem of Algebra

The Fundamental Theorem of Algebra states that every non-constant polynomial with complex coefficients has at least one complex root. It follows that every polynomial of degree n ≥ 1 has exactly n complex roots, counted with multiplicity, and therefore factors completely into linear factors over the complex numbers. The theorem underpins results across mathematics, from the factorization of real polynomials into linear and quadratic factors to the integration of rational functions by partial fractions.
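As a quick illustration of the statement about n roots (a sketch using NumPy; the sample polynomial is chosen here for illustration and is not from the excerpts below), the numerically computed roots of a degree-3 polynomial with a double root come back as exactly three values, the repeated root appearing twice:

```python
import numpy as np

# Sample cubic: p(z) = (z - 1)^2 (z + 2) = z^3 - 3z + 2
coeffs = [1, 0, -3, 2]        # coefficients from highest degree to lowest

roots = np.roots(coeffs)      # numerically computed complex roots
print(len(roots))             # 3 roots for a degree-3 polynomial
print(np.sort_complex(roots)) # approximately [-2, 1, 1]; the double root is counted twice
```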


3 Key excerpts on "Fundamental Theorem of Algebra"

  • Algebra & Geometry: An Introduction to University Mathematics

    In this section, polynomials will be defined only over the complex (or real) numbers. The big question we have so far not answered is whether such a polynomial need have any roots at all. The answer is given by the following theorem. Its name reflects its importance when first discovered, though not its significance in modern algebra, since algebra is no longer just the study of polynomial equations. It was first proved by Gauss.
    Theorem 7.4.1 (Fundamental Theorem of Algebra). Every non-constant polynomial of degree n with complex coefficients has at least one root.
    Box 7.1: Liouville’s Theorem
    We cannot give a complete proof of the Fundamental Theorem of Algebra here, but we can at least describe the ingredients of the most popular one. Calculus can be generalized from real variables to complex variables. The theory that results, called complex analysis, is profoundly important in mathematics. A complex function that can be differentiated everywhere is said to be entire. An example of a differentiable function that is not entire is z ↦ 1/z. The problem is that this function is not defined at z = 0. Suppose that z ↦ f(z) is an entire function. We are interested in the behaviour of the real-valued function z ↦ |f(z)|. If there is some real number B > 0 such that |f(z)| ≤ B for all z, we say that the function f(z) is bounded. Clearly, constant functions z ↦ a, where a is fixed, are bounded entire functions. We can now state Liouville's theorem: every bounded entire function is constant. This can be used to prove the Fundamental Theorem of Algebra as follows. Let z ↦ p(z) be a polynomial function, not constant, and assume that p(z) has no roots. Then z ↦ 1/p(z) is an entire function. We now proceed to get a contradiction using Liouville's theorem. If you look at how polynomials are defined, it is not hard to believe, and it can be proved, that there are real numbers r > 0 and B > 0 such that |p(z)| ≥ B for all |z| > r. This means that |1/p(z)| ≤ 1/B for all |z| > r. We now have to look at the behaviour of the function z ↦ 1/p(z) when |z| ≤ r. It is a theorem that, just by virtue of this function being continuous on the closed disc |z| ≤ r, it is bounded there. It follows that the function z ↦ 1/p(z) is a bounded entire function and so by Liouville's theorem must be constant, which is nonsense. We conclude that the polynomial p(z) must have at least one root after all.
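To make the growth bound used in this argument concrete, here is a small numerical sketch (my own illustration with NumPy and an arbitrary sample polynomial, not taken from the excerpt): the minimum of |p(z)| on circles |z| = r grows with r, so a bound |p(z)| ≥ B for all |z| > r, and hence |1/p(z)| ≤ 1/B there, is plausible even before it is proved.

```python
import numpy as np

# Sample polynomial for illustration: p(z) = z^3 - 2z + 5
coeffs = [1, 0, -2, 5]

theta = np.linspace(0, 2 * np.pi, 2000)      # sample points around a circle
for r in [1, 5, 10, 50, 100]:
    z = r * np.exp(1j * theta)               # points on the circle |z| = r
    min_abs = np.abs(np.polyval(coeffs, z)).min()
    print(f"r = {r:4d}: min |p(z)| on |z| = r is about {min_abs:.3g}")

# The minimum grows roughly like r**3, so |1/p(z)| stays below 1/B far from the origin,
# which is the bound the Liouville argument needs outside the circle |z| = r.
```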
  • Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century

    The second, however, stemmed from Maclaurin’s remark that complex roots of polynomial equations with real coefficients always occur in pairs. Thus, the Fundamental Theorem of Algebra was also rendered as the statement that every real polynomial can be factored into real polynomials of degree one or two. In the latter form, the theorem implied, importantly and in light of the seventeenth-century development of the calculus at the hands of both Newton and Gottfried Leibniz (1646–1716), that every rational function could be integrated using elementary functions via the method of partial fractions. It also enabled the Swiss mathematician Leonhard Euler (1707–1783) to give a general method for solving linear ordinary differential equations. In fact, in a letter written on the topic in 1739 to his friend and fellow countryman Johann Bernoulli (1667–1748), Euler claimed that any real polynomial 1 − ap + bp² − cp³ + · · · could “always be put in the form of a product of factors either simple, 1 − αp, or of two dimensions 1 − αp + βpp, all real.” 70 At the beginning of the eighteenth century, some mathematicians continued to be skeptical of the fundamental theorem. In 1702, Leibniz, for one, believed that he had found a counterexample, 71 namely, the polynomial x⁴ + a⁴, which he factored as (x² + a²√−1)(x² − a²√−1). The roots of the second factor, namely ±a√(√−1), were not obviously complex numbers, thus violating the first version of the theorem. In fact, even though mathematicians of the day understood that solutions of cubic and quartic equations could be found by formulas involving radicals, there was no consensus until the middle of the eighteenth century that roots of complex numbers always produced complex numbers.
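As a side check on the real-factorization form of the theorem (my own sketch using SymPy; the choice a⁴ = 4 is mine, not the excerpt's), the polynomial x⁴ + 4 does split into two real quadratic factors, and those factors are exactly what the method of partial fractions uses:

```python
from sympy import symbols, factor, apart

x = symbols('x')

# Leibniz doubted that x**4 + a**4 admits real factors; for a**4 = 4 it does:
print(factor(x**4 + 4))        # (x**2 - 2*x + 2)*(x**2 + 2*x + 2)

# The real quadratic factors are what the partial-fraction method needs in order
# to integrate a rational function with this denominator using elementary functions.
print(apart(1 / (x**4 + 4)))
```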
  • Mathematics for the Physical Sciences
    Chapter 3

    The roots of polynomial equations

    3.1 INTRODUCTION

    A function f(z) of the form
    (1)    f(z) = a₀zⁿ + a₁zⁿ⁻¹ + · · · + aₙ
    of the complex variable z, with complex coefficients a₀, a₁, . . . , aₙ, is a polynomial of degree n. A complex number z₀ having the property that f(z₀) = 0 is called a root of the equation f(z) = 0, or a zero of the polynomial f(z). We will assume throughout, for simplicity, that in (1) a₀ ≠ 0 and aₙ ≠ 0, which can always be achieved in a trivial manner.
    We further assume that the reader is familiar with the fact that f(z) of (1) always has exactly n zeros z₁, z₂, . . . , zₙ in the complex plane and may be factored in the form
    (2)    f(z) = a₀(z − z₁)(z − z₂) · · · (z − zₙ)
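A brief numerical check of (1) and (2) (a sketch with NumPy; the coefficients below are sample values, not from the chapter): expanding a₀ times the product of the factors (z − zₖ) built from the numerically computed zeros recovers the original coefficients.

```python
import numpy as np

# Sample coefficients a0, ..., an of (1), highest degree first: f(z) = 2z^3 + 3z^2 - z + 5
a = np.array([2.0, 3.0, -1.0, 5.0])

zeros = np.roots(a)                    # the n zeros z_1, ..., z_n
reconstructed = a[0] * np.poly(zeros)  # expand a0 * (z - z_1)...(z - z_n) back into coefficients

print(zeros)
print(np.allclose(reconstructed, a))   # True: the factored form (2) agrees with (1)
```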
    Our concern in this chapter is almost entirely with the analytic (as opposed to the algebraic) theory of polynomial equations. Roughly speaking, this theory is concerned with describing the position of the zeros in the complex plane, without actually solving the equation, as accurately as possible in terms of easily calculated functions of the coefficients.
    Specifically, we list the following questions, all of which are answered more or less completely in the following sections:
    1. Suppose we know the zeros of f(z). What can be said about the zeros of f′(z)?
    2. What circle |z| ≤ R, in the complex plane, surely contains all the zeros of f(z)?
    3. How many zeros does f(z) have in the left (right) half plane? In the unit circle? On the real axis? On the real interval [a, b]? In the sector α ≤ arg z ≤ β?
    4. How can we efficiently calculate the zeros of f(z)?
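Question 2, for instance, has a classical answer in the Cauchy bound R = 1 + max |aₖ/a₀| (taken over k ≥ 1), which always encloses every zero; the following sketch (my own, with a sample polynomial) checks it numerically.

```python
import numpy as np

# Sample polynomial for illustration: f(z) = z^4 - 3z^3 + 7z - 12
a = np.array([1.0, -3.0, 0.0, 7.0, -12.0])

# Cauchy bound: every zero satisfies |z| <= R with R = 1 + max_k |a_k / a_0|, k >= 1
R = 1 + np.max(np.abs(a[1:] / a[0]))

zeros = np.roots(a)
print("R =", R)
print("moduli of the zeros:", np.abs(zeros))
print("all zeros inside |z| <= R:", bool(np.all(np.abs(zeros) <= R)))
```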

    3.2 THE GAUSS-LUCAS THEOREM

    Let us recall, from elementary calculus, the theorem of Rolle, which asserts that if f(a) = f(b) = 0, then f′(x) = 0 somewhere between a and b, f(x) being continuously differentiable in (a, b). Viewed otherwise, this theorem states that if z₁, z₂ are two real zeros of f(z), then f′(z) has a zero somewhere between z₁ and z₂. We propose to generalize this result to the case of arbitrary complex zeros z₁, z₂, . . . , zₙ.
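The generalization the section is heading toward is the Gauss-Lucas theorem: every zero of f′(z) lies in the convex hull of the zeros of f(z). Here is a small numerical sketch of that statement (my own illustration; the triangle of zeros at 0, 1 and i is not an example from the book), using barycentric coordinates to test membership in the hull.

```python
import numpy as np

# Zeros of f at the vertices of a triangle in the complex plane: 0, 1, i
zeros = [0, 1, 1j]
coeffs = np.poly(zeros)               # coefficients of f(z) = z (z - 1)(z - i)

crit = np.roots(np.polyder(coeffs))   # the zeros of f'(z)

# For the triangle with vertices 0, 1, i, a point w = l1*1 + l2*i has barycentric
# coordinates (l0, l1, l2) = (1 - Re w - Im w, Re w, Im w); w lies in the hull iff all >= 0.
for w in crit:
    l1, l2 = w.real, w.imag
    l0 = 1.0 - l1 - l2
    inside = min(l0, l1, l2) >= -1e-12    # small tolerance for rounding error
    print("critical point", np.round(w, 4), "in the convex hull of the zeros:", inside)
```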