Mathematics

Solving Simultaneous Equations Using Matrices

Solving simultaneous equations using matrices involves representing the coefficients of the equations in a matrix, and the variables in a column matrix. By performing matrix operations such as multiplication and inversion, the values of the variables can be determined. This method provides a systematic approach to solving systems of linear equations and is commonly used in various fields including engineering and physics.
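In symbols, and in the notation used throughout the excerpts below, the method writes the system as a single matrix equation and inverts the coefficient matrix:

$$AX = B \quad\Longrightarrow\quad X = A^{-1}B \qquad (\text{provided } \det(A) \neq 0)$$

where A is the matrix of coefficients, X the column matrix of unknowns, and B the column matrix of constants.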

Written by Perlego with AI-assistance

6 Key excerpts on "Solving Simultaneous Equations Using Matrices"

  • Principles of Linear Algebra with Mathematica
    • Kenneth M. Shiskowski, Karl Frinkle (Authors)
    • 2013 (Publication Date)
    • Wiley
      (Publisher)

    Chapter 2

    Linear Systems of Equations and Matrices

    2.1 Linear Systems of Equations

    The basic idea behind a general linear system of equations is that of a system of two xy-plane line equations ax + by = c and dx + ey = f, for real constants a through f. Such a system of two line equations in the xy-plane has a simultaneous solution, the intersection set of the two lines. In general, we consider a simultaneous solution to be the collection of points that simultaneously satisfies all equations in a given system of equations. In the two-line system there are only three possibilities. If the two lines are not parallel, their intersection set is a single point P, and the simultaneous solution is simply {P}. If the two lines are parallel and distinct, their intersection set is empty and there is no simultaneous solution. Last, if the two lines are the same, the simultaneous solution is the entire line, which consists of an infinite number of points. We now give an example of each of these three situations. As you will see, the solution points of each of these three systems are where each pair of lines intersects. To solve for the solution points by hand, we must try to solve for one of the variables x or y by itself, without the other variable. In essence, we assume each pair of lines intersects at a single point unless we discover otherwise.
    Example 2.1.1. Our first example is to solve the following system:
    (2.1)
    In order to solve for one of the variables x or y by itself, we can multiply the first equation by 3 and the second equation by 2, which makes the coefficients of y negatives of each other. We can then add the two equations together, eliminating the variable y and solving for x. Similarly, we can solve for y
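    The system (2.1) itself is not reproduced in the excerpt above, so the following worked example uses a hypothetical stand-in consistent with the description (y-coefficients of 2 and −3, so that multiplying by 3 and 2 makes them negatives of each other):

    $$\begin{aligned} x + 2y &= 4 \\ 2x - 3y &= 1 \end{aligned}$$

    Multiplying the first equation by 3 gives 3x + 6y = 12, and multiplying the second by 2 gives 4x − 6y = 2. Adding the two eliminates y, leaving 7x = 14, so x = 2, and substituting back into the first equation gives y = 1.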
  • Engineering Mathematics with Applications to Fire Engineering
    • Khalid Khan, Tony Lee Graham (Authors)
    • 2018 (Publication Date)
    • CRC Press
      (Publisher)
    Hence, there are infinitely many solutions to this system. The three situations can be generalized as (1) a unique solution, (2) no real solutions, and (3) infinitely many solutions. This has to do with the last row (or rows) of the echelon form matrix being zero. The rank of a matrix is the number of nonzero rows when it has been reduced to echelon form. Similar rules can be applied to any number of equations.

    4.4.3 Matrix Inversion Method

    Two simultaneous linear equations can be written out in matrix form. Consider the following simultaneous equations:

    $$x + 2y = 4$$
    $$3x - 5y = 1$$

    These can be written in matrix form as

    $$\begin{pmatrix} 1 & 2 \\ 3 & -5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \end{pmatrix}$$

    Denoting

    $$A = \begin{pmatrix} 1 & 2 \\ 3 & -5 \end{pmatrix}, \qquad X = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad B = \begin{pmatrix} 4 \\ 1 \end{pmatrix}$$

    gives the matrix equation AX = B. This is the matrix form of the simultaneous equations. Here the unknown is the matrix X, since A and B are already known; A is called the matrix of coefficients.

    4.4.3.1 Matrix Method for Solving Simultaneous Equations

    Given the system of equations in matrix form,

    $$AX = B \qquad (4.12)$$

    multiplying both sides of Equation 4.12 by the inverse of A, provided it exists, gives

    $$A^{-1}AX = A^{-1}B \qquad (4.13)$$

    But A^(-1)A = I, the identity matrix, and IX = X, so this leaves

    $$X = A^{-1}B \qquad (4.14)$$

    Equation 4.14 gives a method for solving simultaneous equations: first write the equations in matrix form, then calculate the inverse of the matrix of coefficients, A^(-1), and finally perform a matrix multiplication. Note: when det(A) = 0, A is not invertible (computing A^(-1) would involve division by zero), so this method cannot be used; such a system has either no solution or infinitely many solutions.

    Example 4.22. Solve the simultaneous equations

    $$x + 2y = 4$$
    $$3x - 5y = 1$$

    Solution: Putting these in matrix form gives

    $$\begin{pmatrix} 1 & 2 \\ 3 & -5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 4 \\ 1 \end{pmatrix}, \qquad AX = B$$

    Now calculate the inverse
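    The excerpt's Example 4.22 can be checked numerically. Below is a minimal sketch in Python with NumPy (not part of the excerpt; np.linalg.det and np.linalg.inv are standard NumPy calls), mirroring the X = A^(-1)B method:

```python
import numpy as np

# The system from Example 4.22:
#   x + 2y = 4
#   3x - 5y = 1
A = np.array([[1.0,  2.0],
              [3.0, -5.0]])   # matrix of coefficients
B = np.array([4.0, 1.0])      # column of constants

# det(A) = (1)(-5) - (2)(3) = -11, which is nonzero, so A is invertible.
assert np.linalg.det(A) != 0

X = np.linalg.inv(A) @ B      # X = A^(-1) B
print(X)                      # [2. 1.]  ->  x = 2, y = 1
```

    In practice np.linalg.solve(A, B) is preferred over forming the inverse explicitly, but the code above follows the matrix inversion method as the excerpt describes it.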
  • Fundamentals of University Mathematics
    • Colin McGregor, Jonathan Nimmo, Wilson Stothers (Authors)
    • 2010 (Publication Date)
    Chapter 11

    Matrices and Linear Equations

    In Chapter 1, we mentioned the fact that a pair (x, y) of real numbers can be used to represent a point in the plane. In Chapter 6, we used such a pair to represent a complex number. This is an example of an important aspect of mathematics: a mathematical object may be interpreted in several different ways. Once we have studied the mathematical objects, we have information which is available in every interpretation.
    In this chapter we introduce a new mathematical object: the matrix. An m × n matrix is a rectangular array of mn numbers. We can think of the matrix as consisting of m rows, each having n entries, or as n columns, each having m entries. For example,

    $$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}$$

    is a 2 × 3 matrix. We regard a matrix as a single entity, to be manipulated as a whole.
    We define addition on the set of m × n matrices. This operation has properties similar to the addition of real numbers. In particular, there is a zero matrix O_{m×n} in the set which behaves like 0 in ℝ, i.e.

    $$A + O_{m \times n} = A = O_{m \times n} + A$$
    Under certain circumstances, we can define multiplication of matrices. Now some of the results are unfamiliar. For example, there are matrices A and B for which AB ≠ BA.
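    A standard illustration of this non-commutativity (not part of the excerpt) is the pair

    $$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$

    for which

    $$AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA$$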
    Matrices have many uses in mathematics and its applications. Here we consider just one—the solution of (systems of) linear equations. Looking at such a system leads to the introduction of new operations on matrices. We develop a systematic method for finding all the solutions.
    In the final sections, we return to the mathematical study of matrices. We consider the existence of matrix inverses. Along the way, we meet the concept of the determinant of a (square) matrix. This is an important topic in its own right and will be used in Chapter 13 in connection with the vector product.

    11.1 Basic Definitions

    Definitions 11.1.1 Let 𝔽 be a number system. A matrix defined over 𝔽 is a rectangular array of elements of 𝔽. These elements are the entries of the matrix. If a matrix A has m rows (horizontal lines of entries) and n columns (vertical lines), then A is of type m × n or is an m × n matrix.
  • Essentials of Scientific Computing

    Numerical Methods for Science and Engineering

    3

    Solving systems of linear equations

    This chapter considers one of the cornerstones of numerical analysis, namely, solving systems of linear equations. Various properties of systems of linear equations (such as structure, definiteness, and conditioning) are discussed. The main principles of direct methods are considered briefly, just to provide the information needed to use modern software. Particular emphasis is placed on iterative methods because, in many instances, they are more efficient than direct methods.
    Notation

    • A = {a_nm} : matrix (uppercase bold letter)
    • x : vector (lowercase bold letter)
    • N : matrix or vector size
    • ||·|| : matrix or vector norm
    • A^T, x^T : transposed matrix or vector
    • (x, y) = x^T y : dot product of vectors x and y
    • SPD : symmetric positive definite (matrix)
    • det(A) : matrix determinant
    • diag(a_1, …, a_N) : diagonal matrix with elements a_1, …, a_N
    • I : identity matrix (I = diag(1, …, 1))
    • A^(-1) : inverse matrix (A^(-1) A = I)
    • ε_p : prescribed accuracy for calculating the approximate solution
    • e_p : perturbation error
    • e_a : algorithm error
    • e_τ : rounding error
    • x* : true solution of a linear system (Ax* ≡ f)
    • x^(k) : k-th approximation to the true solution
    • e^(k) = x* − x^(k) : error of the k-th approximation
    • r^(k) = Ax^(k) − f : residual vector
    • z_n(B) : n-th eigenvector of matrix B
    • λ_n(B) : n-th eigenvalue of matrix B
    • s(B) = max_n |λ_n(B)| : spectral radius of matrix B
    • N_ops : total number of operations (additions and multiplications/divisions) required to solve a system of linear equations

    3.1 LINEAR ALGEBRA BACKGROUND

    To begin with, we will consider some basic definitions from linear algebra needed for an understanding of matrix computations. We will restrict our consideration to the case when matrices and vectors are real. Vectors, then, are objects of the N-dimensional Euclidean space ℝ^N. A list of the operations defined in ℝ^N
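    The excerpt stresses iterative methods; as a concrete illustration, here is a minimal sketch of the classical Jacobi iteration in Python with NumPy. This is not from the book: the function name, tolerance, and test system are invented for illustration, and the sketch assumes A is strictly diagonally dominant so the iteration converges. It follows the excerpt's notation, with x^(k) the k-th approximation to the true solution of Ax = f.

```python
import numpy as np

def jacobi(A, f, tol=1e-10, max_iter=500):
    # Jacobi iteration for A x = f (hypothetical helper; assumes A is
    # strictly diagonally dominant, which guarantees convergence).
    D = np.diag(A)                    # diagonal entries of A, as a vector
    R = A - np.diagflat(D)            # off-diagonal part of A
    x = np.zeros_like(f)              # x^(0): initial approximation
    for k in range(max_iter):
        x_new = (f - R @ x) / D       # x^(k+1) = D^{-1} (f - R x^(k))
        if np.linalg.norm(x_new - x) < tol:   # prescribed accuracy reached
            return x_new
        x = x_new
    return x

# Hypothetical diagonally dominant test system:
#   4x + y  = 6
#   2x + 5y = 12,   true solution x* = (1, 2)
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
f = np.array([6.0, 12.0])
print(jacobi(A, f))   # approximately [1. 2.]
```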
  • Numerical Methods in Engineering and Science
    3 Solution of Simultaneous Algebraic Equations

    Chapter Objectives
    • Introduction to determinants
    • Introduction to matrices
    • Solution of linear simultaneous equations
    • Direct methods of solution: Cramer’s rule, Matrix inversion method, Gauss elimination method, Gauss-Jordan method, Factorization method
    • Iterative methods of solution: Jacobi’s method, Gauss-Seidel method, Relaxation method
    • Ill-conditioned equations
    • Comparison of various methods
    • Solution of non-linear simultaneous equations—Newton-Raphson method
    • Objective type of questions
    3.1 Introduction to Determinants
     
    1. Definition. The expression

    $$\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}$$

    is called a determinant of the second order and stands for $a_1 b_2 - a_2 b_1$. It contains four numbers a_1, b_1, a_2, b_2 (called elements) which are arranged along two horizontal lines (called rows) and two vertical lines (called columns).
    $$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$$

    is called a determinant of the third order. It consists of nine elements which are arranged in three rows and three columns.
    In general, a determinant of the nth order is of the form

    $$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

    which is a block of n² elements in the form of a square along n rows and n columns. The diagonal through the left-hand top corner, which contains the elements a_11, a_22, a_33, …, a_nn, is called the leading diagonal.
    Expansion of a determinant. The cofactor of an element in a determinant is the determinant obtained by deleting the row and the column which intersect at that element, taken with the proper sign. The sign of an element in the ith row and jth column is (−1)^(i+j)
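    As a standard illustration (not shown in the excerpt), expanding a third-order determinant along its first row uses the signs (−1)^(1+1) = +, (−1)^(1+2) = −, and (−1)^(1+3) = +:

    $$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = a_1 \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - b_1 \begin{vmatrix} a_2 & c_2 \\ a_3 & c_3 \end{vmatrix} + c_1 \begin{vmatrix} a_2 & b_2 \\ a_3 & b_3 \end{vmatrix}$$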
  • Linear Algebra

    An Inquiry-Based Approach

    • Jeff Suzuki (Author)
    • 2021 (Publication Date)
    • CRC Press
      (Publisher)
    all equations are in standard form, we’ll be able to work with the vectors as easily as we could have worked with the equations.
    For example, the system of equations

    $$\begin{aligned} 3x + 5y &= 11 \\ 2x - 7y &= 4 \end{aligned}$$

    could be represented as the vectors ⟨3, 5, 11⟩ and ⟨2, −7, 4⟩, where the first component is the coefficient of x; the second component is the coefficient of y; and the last component is the constant.
    Actually, if we write our two equations as two vectors, we might forget that they are in fact related. To reinforce their relationship, we’ll represent them as a single object by throwing them inside a set of parentheses. We might represent the evolution of our notation as
    $$\begin{cases} 3x + 5y = 11 \\ 2x - 7y = 4 \end{cases} \quad\rightarrow\quad \langle 3, 5, 11 \rangle,\ \langle 2, -7, 4 \rangle \quad\rightarrow\quad \begin{pmatrix} 3 & 5 & 11 \\ 2 & -7 & 4 \end{pmatrix}$$
    As a general rule, mathematicians are terrible at coming up with new words, so we appropriate existing words that convey the essential meaning of what we want to express.
    In geology, when precious objects like gemstones or fossils are embedded in a rock, the rock is called a matrix. The term is also used in biology for the structure that holds other precious objects—the internal organs—in place, and in construction for the mortar that holds bricks in place. Thus in 1850, the mathematician James Joseph Sylvester (1814–1897) appropriated the term for the structure that keeps the coefficients and constants of a linear equation in place.
    Since the matrix shown includes both the coefficients of our equation and the constants, we can describe it as the coefficient matrix augmented by a column of constants or, more simply, the augmented coefficient matrix
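    To make the augmented coefficient matrix concrete, here is a minimal sketch in Python with NumPy (not part of the excerpt; the variable names are invented for illustration). It builds the augmented matrix for the system above, splits it back into the coefficient matrix and the column of constants, and solves:

```python
import numpy as np

# Augmented coefficient matrix for the excerpt's system:
#   3x + 5y = 11
#   2x - 7y = 4
# Each row holds the coefficients of x and y, then the constant.
augmented = np.array([[3.0,  5.0, 11.0],
                      [2.0, -7.0,  4.0]])

A = augmented[:, :-1]    # coefficient matrix
b = augmented[:, -1]     # column of constants

x, y = np.linalg.solve(A, b)
print(x, y)              # x = 97/31 ≈ 3.1290, y = 10/31 ≈ 0.3226
```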
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.