Mathematics

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are fundamental concepts in linear algebra. An eigenvector of a linear transformation is a non-zero vector whose direction is unchanged (or exactly reversed) by the transformation, and the corresponding eigenvalue is the scalar factor by which that vector is stretched or compressed. Together they are used to analyze the behavior of linear transformations and systems of linear equations.
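
As a minimal numerical illustration (a NumPy sketch that is not taken from any of the excerpts below; the matrix is an arbitrary example), the eigenpairs of a small matrix can be computed and the defining relation A v = λ v checked directly:

```python
import numpy as np

# A small example matrix (chosen arbitrarily for illustration).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (right) eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

for lam, v in zip(eigvals, eigvecs.T):
    # Check the defining relation A v = lambda v for each pair.
    print(lam, np.allclose(A @ v, lam * v))  # prints True for every pair
```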

Written by Perlego with AI-assistance

7 Key excerpts on "Eigenvalues and Eigenvectors"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), and each one adds context and meaning to key research topics.
  • Mathematics for Economics and Finance
    • Michael Harrison, Patrick Waldron(Authors)
    • 2011(Publication Date)
    • Routledge
      (Publisher)

    ...Eigenvalues and Eigenvectors DOI: 10.4324/9780203829998-5 3.1 Introduction Some of the basic ideas and issues encountered in the previous chapters are often covered in an introductory course in mathematics for economics and finance. The fundamental ideas of Eigenvalues and Eigenvectors and the associated theorems introduced in this chapter are probably not. Many readers are therefore likely to be encountering these concepts for the first time. Hence this chapter begins by providing definitions and illustrations of Eigenvalues and Eigenvectors, and explaining how they can be calculated. It goes on to examine some of the uses of these concepts and to establish a number of theorems relating to them that will be useful when we return to the detailed analysis of our various applications. 3.2 Definitions and illustration Eigenvalues and Eigenvectors arise in determining solutions to equations of the form A x = λ x (3.1), where A is an n × n matrix, x is a non-zero n-vector and λ is a scalar, and where the solution is for λ and x, given A. We shall call equations like (3.1) eigenequations. The scalar λ is called an eigenvalue of A, while x is known as an eigenvector of A associated with λ. Sometimes the value, λ, and the vector, x, are called the proper, characteristic or latent value and vector. Consider the matrix A = [2 0; 8 −2] (rows separated by semicolons) and the vector x = [1, 2]⊤. Since A x = [2 0; 8 −2][1, 2]⊤ = [2, 4]⊤ = 2x (3.2), λ = 2 is an eigenvalue of A and x is an associated eigenvector. It is easy to check, by substituting into the eigenequation (3.1), that another eigenvector of A associated with λ = 2 is [−1, −2]⊤. Likewise, another eigenvalue of A is −2, which has associated with it eigenvectors such as [0, 1]⊤ and [0, −1]⊤. Thus, for a given λ, we note that there are multiple associated eigenvectors. For given λ and x, A may be viewed as the matrix that, by pre-multiplication, changes all of the elements of x by the same proportion, λ...
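
    The 2 × 2 example above is easy to verify numerically. The following NumPy sketch (an illustration, not part of the original text) confirms that λ = 2 and λ = −2 are the eigenvalues of A = [2 0; 8 −2] and that [1, 2]⊤ satisfies the eigenequation for λ = 2:

    ```python
    import numpy as np

    # Matrix and vector from the Harrison and Waldron example (3.2).
    A = np.array([[2.0, 0.0],
                  [8.0, -2.0]])
    x = np.array([1.0, 2.0])

    print(A @ x)                 # [2. 4.], i.e. 2 * x, so lambda = 2
    print(np.linalg.eigvals(A))  # contains 2 and -2

    # Any non-zero scalar multiple of an eigenvector is again an eigenvector.
    print(np.allclose(A @ (-x), 2 * (-x)))  # True: [-1, -2] also works for lambda = 2
    e = np.array([0.0, 1.0])
    print(np.allclose(A @ e, -2 * e))       # True: [0, 1] is an eigenvector for lambda = -2
    ```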

  • Introductory Mathematical Economics
    • Adil H. Mouhammed(Author)
    • 2020(Publication Date)
    • Routledge
      (Publisher)

    ...Chapter One Vectors and Matrices Vectors and matrices are very important tools in economic analysis. This chapter outlines the algebra of vectors. Students will not only be able to add and subtract vectors but also to multiply vectors. For economic modeling, linear independence is indispensable in that economic models must have unique solutions, and linear independence provides these solutions. Similarly, linear and convex combinations are also important for economics. An extension of vector analysis is matrix algebra. The types of matrices and matrix operations are outlined in this chapter. A system of equations is solved by using matrix methods such as the inverse method, Cramer’s rule and the Gauss-Jordan method. Eigenvalues and vectors are explained and used for diagonalizing matrices. Finally, various economic applications of matrices are provided. Vectors A vector is a symbol used to refer to a set of variables or coefficients. For example, Y = (y1, y2, y3, …, yn), where Y is used to represent n variables. Vector A can be represented by A = (a1, a2, a3, …, an), where A contains a set of coefficients denoted by a’s. Vector Y can also be a set of real integer numbers such as Y = (3, 4, 6, 8). Vector Y has a dimension as well. The dimension of a vector is the total number of elements (components) in that vector. For example, the above vector, Y, has four components and hence it is said to be of dimension four. Very compactly, a vector such as Y, where Y = (5, 6, 7), has one row and three columns. Accordingly, it is said to be a vector of dimension 1 by 3, written Y 1×3, where 1 and 3 indicate the number of rows and columns, respectively. In general, Y m×n is said to be a vector of dimension m × n, with m rows and n columns. Example: A = (3 3), B = (5 6 7), C = [4 6 7]⊤ (a column vector). All these vectors have different dimensions: A is 1 × 2, B is 1 × 3, and C is 3 × 1...
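
    The dimensions discussed above map directly onto array shapes. Here is a small NumPy sketch (illustrative only, not drawn from the book):

    ```python
    import numpy as np

    A = np.array([[3, 3]])          # 1 x 2 row vector
    B = np.array([[5, 6, 7]])       # 1 x 3 row vector
    C = np.array([[4], [6], [7]])   # 3 x 1 column vector

    # .shape reports (rows, columns), matching the m x n convention above.
    print(A.shape, B.shape, C.shape)   # (1, 2) (1, 3) (3, 1)
    ```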

  • Quantitative Methods for Business and Economics
    • Adil H. Mouhammed(Author)
    • 2015(Publication Date)
    • Routledge
      (Publisher)

    ...CHAPTER ONE Vectors and Matrices Vectors and matrices are very important tools in economic analysis. This chapter outlines the algebra of vectors. Students will not only be able to add and subtract vectors but also to multiply vectors. For economic modeling, linear independence is indispensable in that economic models must have unique solutions, and linear independence provides these solutions. Similarly, linear and convex combinations are also important for economics. An extension of vector analysis is matrix algebra. The types of matrices and matrix operations are outlined in this chapter. A system of equations is solved by using matrix methods such as the inverse method, Cramer’s rule and the Gauss-Jordan method. Eigenvalues and vectors are explained and used for diagonalizing matrices. Finally, various economic applications of matrices are provided. Vectors A vector is a symbol used to refer to a set of variables or coefficients. For example, Y = (y1, y2, y3, …, yn), where Y is used to represent n variables. Vector A can be represented by A = (a1, a2, a3, …, an), where A contains a set of coefficients denoted by a’s. Vector Y can also be a set of real integer numbers such as Y = (3, 4, 6, 8). Vector Y has a dimension as well. The dimension of a vector is the total number of elements (components) in that vector. For example, the above vector, Y, has four components and hence it is said to be of dimension four. Very compactly, a vector such as Y, where Y = (5, 6, 7), has one row and three columns. Accordingly, it is said to be a vector of dimension 1 by 3, written Y 1×3, where 1 and 3 indicate the number of rows and columns, respectively. In general, Y m×n is said to be a vector of dimension m × n, with m rows and n columns. Example: A = (3 3), B = (5 6 7), C = [4 6 7]⊤ (a column vector). All these vectors have different dimensions: A is 1 × 2, B is 1 × 3, and C is 3 × 1...
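
    As a quick illustration of the matrix methods for solving a system of equations mentioned above, here is a NumPy sketch with an arbitrary 2 × 2 system (the numbers are not from the book):

    ```python
    import numpy as np

    # Solve A x = b, e.g. the system 2x + y = 5 and x + 3y = 10.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    x_inverse = np.linalg.inv(A) @ b   # the "inverse method"
    x_solve = np.linalg.solve(A, b)    # Gaussian elimination, numerically preferable

    print(x_inverse, x_solve)               # both give [1. 3.]
    print(np.allclose(x_inverse, x_solve))  # True
    ```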

  • Data Science and Machine Learning: Mathematical and Statistical Methods
    • Dirk P. Kroese, Zdravko Botev, Thomas Taimre, Radislav Vaisman(Authors)
    • 2019(Publication Date)

    ...The corresponding unit eigenvectors are v1 ≈ [0.3339, −0.9426]⊤ and v2 ≈ [0.9847, −0.1744]⊤. The eigenspace corresponding to λ1 is V1 = Span{v1} = {βv1 : β ∈ ℝ} and the eigenspace corresponding to λ2 is V2 = Span{v2}. The algebraic and geometric multiplicities are 1 in this case. Any pair of vectors taken from V1 and V2 forms a basis for ℝ². Figure A.3 shows how v1 and v2 are transformed to Av1 ∈ V1 and Av2 ∈ V2, respectively. Figure A.3: The dashed arrows are the unit eigenvectors v1 (blue) and v2 (red) of matrix A. Their transformed values Av1 and Av2 are indicated by solid arrows. ■ A matrix for which the algebraic and geometric multiplicities of all its eigenvalues are the same is called semi-simple. This is equivalent to the matrix being diagonalizable, meaning that there is a matrix V and a diagonal matrix D such that A = V D V⁻¹. To see that this so-called eigen-decomposition holds, suppose A is a semi-simple matrix with eigenvalues λ1, …, λ1 (repeated d1 times), …, λr, …, λr (repeated dr times). Let D be the diagonal matrix whose diagonal elements are the eigenvalues of A, and let V be a matrix whose columns are linearly independent eigenvectors corresponding to these eigenvalues. Then, for each (eigenvalue, eigenvector) pair (λ, v), we have Av = λv. Hence, in matrix notation, we have AV = VD, and so A = VDV⁻¹. A.5.1 Left- and Right-Eigenvectors The eigenvector as defined in the previous section is called a right-eigenvector, as it lies on the right of A in the equation Av = λv. If A is a complex matrix with an eigenvalue λ, then the complex conjugate λ̄ is an eigenvalue of A*. To see this, define B := λI − A and B* := λ̄I − A*. Since λ is an eigenvalue, we have det(B) = 0. Applying the identity det(B) = the complex conjugate of det(B*), we see that therefore det(B*) = 0, and hence that λ̄ is an eigenvalue of A*. Let w be an eigenvector corresponding to λ̄...
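
    The eigen-decomposition A = V D V⁻¹ described above can be checked directly. The following NumPy sketch assumes an arbitrary diagonalizable matrix (not the matrix shown in Figure A.3):

    ```python
    import numpy as np

    # An arbitrary diagonalizable (semi-simple) matrix, for illustration only.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    eigvals, V = np.linalg.eig(A)   # columns of V are right-eigenvectors
    D = np.diag(eigvals)            # diagonal matrix of eigenvalues

    # A V = V D, and therefore A = V D V^{-1}.
    print(np.allclose(A @ V, V @ D))                 # True
    print(np.allclose(A, V @ D @ np.linalg.inv(V)))  # True
    ```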

  • An Introduction to Econometric Theory
    • James Davidson(Author)
    • 2018(Publication Date)
    • Wiley
      (Publisher)

    ...9 Eigenvalues and Eigenvectors This chapter represents something of a change of gear in the study of matrices. So far, we have learned how to do various calculations using matrix notation and exploit the fairly simple rules of matrix algebra. Most importantly, we have used linearity to calculate the expected values of matrix functions of random variables and so computed means and variances of estimators. However, all these calculations could in principle have been done using ‘sigma’ notation with scalar quantities. Matrix algebra simply confers the benefits of simplicity and economy in what would otherwise be seriously complicated calculations. What happens in this chapter is different in kind because the methods don't really have any counterparts in scalar algebra. One enters a novel and rather magical world where seemingly intractable problems turn out to have feasible solutions. Careful attention to detail will be necessary to keep abreast of the new ideas. This is a relatively technical chapter, and some of the arguments are quite intricate. This material does not all need to be completely absorbed in order to make use of the results in the chapters to come, and readers who are happy to take such results on trust may want to skip or browse it at first reading. The material essential for understanding least squares inference relates to the diagonalization of symmetric idempotent matrices. 9.1 The Characteristic Equation Let A be square, and consider for a scalar λ the scalar equation |A − λI| = 0 (9.1). This is the determinant of the matrix formed from A by subtracting λ from each of its diagonal elements. It is known as the characteristic equation of A. By considering the form of the determinant, as in (3.8), this is found to be an nth-order polynomial in λ. The determinant is a sum of n! terms, of which one is the product of the diagonal elements. When multiplied out, it must contain the term (−λ)ⁿ, and in general, powers of λ of all orders up to n appear in one term or another...
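
    The characteristic equation can be formed and solved numerically. Here is a short NumPy sketch (the matrix is an arbitrary example, not one from Davidson's text):

    ```python
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # np.poly(A) returns the coefficients of the characteristic polynomial
    # det(lambda*I - A), highest power first: here lambda^2 - 7*lambda + 10.
    coeffs = np.poly(A)
    print(coeffs)                # [ 1. -7. 10.]

    # Its roots are precisely the eigenvalues of A.
    print(np.roots(coeffs))      # 5 and 2 (order may vary)
    print(np.linalg.eigvals(A))  # 5 and 2 (order may vary)
    ```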

  • Statistical Methods for the Social and Behavioural Sciences

    ...symmetric matrices, such as covariance and correlation matrices. The eigenstructure of a symmetric matrix S can be expressed as S = U D U′, where the columns of U are eigenvectors and D is a diagonal matrix with eigenvalues as diagonal elements (this D is not the same as the D matrix used earlier to calculate a covariance matrix). Eigenvalues and Eigenvectors are also sometimes called characteristic roots and vectors. Taking a few steps back, if S is a (P × P) symmetric matrix, u is a vector of length P, and λ is a scalar such that S u = λ u (Equation 7.2), then λ is an eigenvalue of S and u is the corresponding eigenvector. The potential value of this relation may seem readily apparent: On the left-hand side of the equation, u is premultiplied by a full (P × P) matrix (i.e., S), but on the right-hand side, this matrix has been replaced by a single scalar term (i.e., λ). It is as if all of the numerical information in S has been collapsed into a single value, λ. But for any given symmetric matrix, there will generally be more than one nonzero eigenvalue–eigenvector combination which satisfies Equation 7.2. Specifically, a (P × P) symmetric matrix has P eigenvalues that are typically ordered from largest to smallest, λ1 ≥ λ2 ≥ … ≥ λP, and are respectively associated with eigenvectors u1, u2, …, uP. The eigenvalues are arranged in descending order as the diagonal elements of the diagonal matrix D, and the corresponding eigenvectors can be collected as columns in a matrix U (such that the first column of U is u1, the second column is u2, and so on). Next, Equation 7.2 can be expanded to SU = UD, now showing that the original symmetric matrix multiplied by the set of eigenvectors can be expressed as the product of the eigenvectors and a simple diagonal matrix containing the eigenvalues. The matrix of eigenvectors U is orthogonal, meaning that UU′ = I...
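
    For symmetric matrices such as covariance or correlation matrices, the decomposition S = U D U′ can be computed with a routine specialised to symmetric problems. The NumPy sketch below uses an illustrative 3 × 3 correlation-like matrix (not one from the text):

    ```python
    import numpy as np

    # An illustrative symmetric (correlation-like) matrix.
    S = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])

    # eigh is designed for symmetric/Hermitian matrices; it returns eigenvalues
    # in ascending order, so reverse to get the usual descending order.
    eigvals, U = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]
    eigvals, U = eigvals[order], U[:, order]
    D = np.diag(eigvals)

    print(np.allclose(S @ U, U @ D))        # SU = UD
    print(np.allclose(S, U @ D @ U.T))      # S = U D U'
    print(np.allclose(U @ U.T, np.eye(3)))  # U is orthogonal: UU' = I
    ```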

  • Foundations of Vibroacoustics
    • Colin Hansen(Author)
    • 2018(Publication Date)
    • CRC Press
      (Publisher)

    ...The polynomial equation, |λI − A| = 0, is referred to as the characteristic equation of A. The solutions to the characteristic equation are the eigenvalues of A. If λi is an eigenvalue of A, then there exists at least one vector, qi, that satisfies the relationship: A qi = λi qi (A.34) The vector, qi, is an eigenvector of A. If the eigenvalue, λi, is not repeated, then the eigenvector, qi, is unique (up to a scalar multiple). If an eigenvalue, λi, is real, then the entries in the associated eigenvector, qi, are real; if λi is complex, then so too are the entries in qi. The eigenvalues of a Hermitian matrix are all real, and if the matrix is also positive definite, the eigenvalues are also all positive. If a matrix is real and symmetric, then the eigenvalues are also all real. Further, it is true that: |A| = ∏_{i=1}^{n} λi (A.35) If A is singular, then there is at least one eigenvalue equal to zero. A.10  Orthogonality If a square matrix, A, has the property A^H A = A A^H = I, then the matrix, A, is said to be orthogonal. The eigenvalues of A then have a magnitude of unity. If qi is an eigenvector associated with λi, and qj is an eigenvector associated with λj, and if λi ≠ λj and qi^H qj = 0, then the vectors, qi and qj, are said to be orthogonal. The eigenvectors of a Hermitian matrix are all orthogonal. Further, it is common to normalise the eigenvectors such that qi^H qi = 1, in which case the eigenvectors are said to be orthonormal. A set of orthonormal eigenvectors can be expressed as columns of a unitary matrix, Q: Q = (q1, q2, ⋯, qn) (A.36) which means that: Q^H Q = Q Q^H = I (A.37) The set of equations that define the eigenvectors, expressed for a single eigenvector in Equation (A.34), can now be written in matrix form as: A Q = Q Λ (A.38) where Λ is the diagonal matrix of eigenvalues: Λ = diag(λ1, λ2, ⋯, λn) (A.39) Post-multiplying both sides of Equation...
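
    The relations above (A.34 to A.38) are easy to check numerically. The following NumPy sketch uses an illustrative Hermitian matrix (not taken from the appendix):

    ```python
    import numpy as np

    # An illustrative Hermitian matrix: equal to its own conjugate transpose.
    A = np.array([[2.0, 1.0 - 1.0j],
                  [1.0 + 1.0j, 3.0]])

    eigvals, Q = np.linalg.eigh(A)   # eigh handles Hermitian matrices
    Lam = np.diag(eigvals)

    print(np.all(np.isreal(eigvals)))                       # eigenvalues of a Hermitian matrix are real
    print(np.allclose(Q.conj().T @ Q, np.eye(2)))           # eigenvectors are orthonormal: Q^H Q = I
    print(np.allclose(A @ Q, Q @ Lam))                      # A Q = Q Lambda (A.38)
    print(np.allclose(np.linalg.det(A), np.prod(eigvals)))  # |A| = product of eigenvalues (A.35)
    ```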