Mathematics

Operations with Matrices

Operations with matrices include addition, subtraction, and multiplication. When adding or subtracting matrices, corresponding elements are combined, so the two matrices must have the same dimensions. For matrix multiplication, the number of columns in the first matrix must equal the number of rows in the second matrix, and the resulting matrix has as many rows as the first matrix and as many columns as the second.
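These rules can be sketched in plain Python; the helper names mat_add and mat_mul are our own, not taken from any of the excerpted texts:

```python
def mat_add(A, B):
    # Entrywise sum; both matrices must have the same dimensions.
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same dimensions")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    # Columns of A must equal rows of B; result is rows(A) x cols(B).
    if len(A[0]) != len(B):
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]      # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]         # 3 x 2

print(mat_mul(A, B))  # 2 x 2 result: [[4, 5], [10, 11]]
```

A 2 × 3 matrix times a 3 × 2 matrix yields a 2 × 2 product, matching the dimension rule above.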

Written by Perlego with AI-assistance

12 Key excerpts on "Operations with Matrices"

  • Fundamentals of University Mathematics
    • Colin McGregor, Jonathan Nimmo, Wilson Stothers (Authors)
    • 2010 (Publication Date)
    Chapter 11

    Matrices and Linear Equations

    In Chapter 1 , we mentioned the fact that a pair (x, y ) of real numbers can be used to represent a point in the plane. In Chapter 6 , we used such a pair to represent a complex number. This is an example of an important aspect of mathematics— a mathematical object may be interpreted in several different ways. Once we have studied the mathematical objects, we have information which is available in every interpretation.
    In this chapter we introduce a new mathematical object—the matrix. An m × n matrix is a rectangular array of mn numbers. We can think of the matrix as consisting of m rows , each having n entries , or as n columns , each having m entries. For example,
    is a 2 × 3 matrix. We regard a matrix as a single entity, to be manipulated as a whole.
    We define addition on the set of m × n matrices. This operation has properties similar to the addition of real numbers. In particular, there is a
    zero matrix Om×n
    in the set which behaves like the number 0, i.e. A + Om×n = Om×n + A = A.
    Under certain circumstances, we can define multiplication of matrices. Now some of the results are unfamiliar. For example, there are matrices A and B for which AB ≠ BA.
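The claim that AB ≠ BA can occur is easy to verify numerically; the pair below is a standard illustrative choice, not one taken from the text:

```python
import numpy as np

# Two square matrices whose products differ depending on order.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])

print(A @ B)  # [[1 0], [0 0]]
print(B @ A)  # [[0 0], [0 1]]
```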
    Matrices have many uses in mathematics and its applications. Here we consider just one—the solution of (systems of) linear equations. Looking at such a system leads to the introduction of new operations on matrices. We develop a systematic method for finding all the solutions.
    In the final sections, we return to the mathematical study of matrices. We consider the existence of matrix inverses. Along the way, we meet the concept of the determinant of a (square) matrix. This is an important topic in its own right and will be used in Chapter 13 in connection with the vector product.

    11.1 Basic Definitions

    Definitions 11.1.1 Let S be a number system. A matrix defined over S is a rectangular array of elements of S. These elements are the entries of the matrix. If a matrix A has m rows (horizontal lines of entries) and n columns (vertical lines), then A is of type m × n or is an m × n matrix.
  • 3D Math Primer for Graphics and Game Development
    Chapter 4

    Introduction to Matrices

    Unfortunately, no one can be told what the matrix is. You have to see it for yourself.
    — Morpheus in The Matrix (1999)
    Matrices are of fundamental importance in 3D math, where they are primarily used to describe the relationship between two coordinate spaces. They do this by defining a computation to transform vectors from one coordinate space to another.
    This chapter introduces the theory and application of matrices. Our discussion will follow the pattern set in Chapter 2 when we introduced vectors: mathematical definitions followed by geometric interpretations.
    • Section 4.1 discusses some of the basic properties and operations of matrices strictly from a mathematical perspective. (More matrix operations are discussed in Chapter 6 .)
    • Section 4.2 explains how to interpret these properties and operations geometrically.
    • Section 4.3 puts the use of matrices in this book in context within the larger field of linear algebra.
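As a sketch of "transforming vectors from one coordinate space to another", here is a 2 × 2 rotation matrix applied to a vector; the 90-degree rotation is our own example, not one from the book:

```python
import numpy as np

theta = np.pi / 2          # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])   # unit vector along the x-axis
print(R @ v)               # approximately [0, 1]: rotated onto the y-axis
```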

    4.1   Mathematical Definition of Matrix

    In linear algebra, a matrix is a rectangular grid of numbers arranged into rows and columns . Recalling our earlier definition of vector as a one-dimensional array of numbers, a matrix may likewise be defined as a two-dimensional array of numbers. (The “two” in “two-dimensional array” comes from the fact that there are rows and columns, and should not be confused with 2D vectors or matrices.) So a vector is an array of scalars, and a matrix is an array of vectors.
    This section presents matrices from a purely mathematical perspective. It is divided into eight subsections.
  • A First Course in Linear Algebra
    • Minking Eie, Shou-Te Chang (Authors)
    • 2016 (Publication Date)
    • WSPC (Publisher)

    CHAPTER 4

    Elementary Matrix Operations

    We have seen that linear transformations and matrices are two sides of the same coin. We may solve a problem about linear transformations by working with matrices, and we may solve a problem about matrices by working with linear transformations. In this chapter we will discuss how to handle matrices; this will let us answer more difficult questions.
    Throughout this chapter all vector spaces are over F and all matrices have entries in F unless otherwise noted.

    4.1 Elementary matrix operations

    In this section we will discuss some of the most important invertible matrices.
    Remember that in §1.4 we used the elementary row operations to change the augmented matrix of a system of linear equations into its reduced echelon form. This will help us solve the given system of linear equations. We will carry on the discussion of elementary row (or column) operations in more detail in this section.
    Row and column operations. Before we start let's first make an observation. Let A ∈ Mm×n(F). For our purpose we will view A as a column m-vector with row n-vectors as its entries. In other words, we may view A as
    A = (r1, r2, …, rm)T,
    where ri is a row n-vector for each i. Notice that
    (a1, a2, …, am)A = a1r1 + a2r2 + ⋯ + amrm.
    If we multiply a row m-vector to the left of A we obtain a linear combination of the rows of A. More generally, if we multiply an ℓ × m matrix to the left of A, we obtain an ℓ × n matrix whose rows are linear combinations of the rows of A. Thus we can see that to multiply a matrix to the left of A is equivalent to performing a row operation on A.
    Similarly, if we view A as a row n-vector with column m-vectors as its entries,
    A = (c1, c2, …, cn),
    we have that multiplying a column n-vector to the right of A yields a linear combination of the columns of A. If we multiply an n × p matrix to the right of A, we obtain an m × p matrix whose columns are linear combinations of the columns of A. Thus we can see that to multiply a matrix to the right of A is equivalent to performing a column operation on A.
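The observation above can be checked numerically; the matrix E below, which encodes a row swap and a row combination, is our own example:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])

# Left-multiplying by E performs row operations on A:
E = np.array([[0, 1, 0],   # new row 0 = old row 1
              [1, 0, 0],   # new row 1 = old row 0
              [2, 0, 1]])  # new row 2 = 2*(row 0) + row 2

print(E @ A)  # each row of the product is a combination of the rows of A
```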
  • Maths for Chemists
    11 Working with Arrays II: Matrices and Matrix Algebra
    In the previous chapter, we saw how determinants are used to tackle problems involving the solution of systems of linear equations. In general, the branch of mathematics which deals with linear systems is known as linear algebra, in which matrices and vectors play a dominant role. In this chapter we shall explore how matrices and matrix algebra are used to address problems involving coordinate transformations, as well as revisiting the solution of sets of simultaneous linear equations. Vectors are explored in Chapter 12.
    Matrices are two-dimensional arrays (or tables) with specific shapes and properties. Their key property is that they give us a formalism for systematically handling sets of objects, called elements, which, for example, can be numbers, chemical property values, algebraic quantities or integrals. Superficially, matrices resemble determinants, insofar as they are constructed from arrays of elements. However, as we shall see, they are really quite distinct from one another.
  • Introduction to Linear Algebra
    A Primer for Social Scientists

    • Gordon Mills (Author)
    • 2017 (Publication Date)
    • Routledge (Publisher)
    main diagonal. Examples:

    [ 1 2 3 ]    [ 2 7 ]    [ 0 0 ]    [ 3 ]
    [ 1 7 9 ]    [ 3 0 ]    [ 0 1 ]    [ 3 ]

    are all matrices. But the following are not matrices, since they do not have their elements arranged in rectangular arrays comprising rows and columns:

    [ 0 1 ]    [ 2 3 ]
    [ 0 ]      [ 4 ]

    3.2 The first matrix operations

    A matrix is merely a table of numbers. Apart from being a convenient way of recording certain types of numerical data, it has no particular value in itself unless we can define operations which may be performed on it. As with the vector operations, it is not enough to invent operations which are logically consistent; we want operations which are useful. In the present section, we deal principally with addition and subtraction where the obvious choice is just about the only conceivable definition. In a later section, the discussion of the problem of defining matrix multiplication will show that more than one definition is not only possible but potentially useful.
    Definition. Two matrices A and B are defined to be equal (written A = B) if and only if the corresponding elements are equal, i.e. if and only if
    aij = bij
    for all i and j (i = 1, …, m; j = 1, …, n).
    Note that two matrices must be of the same order (i.e. one must have the same number of rows as the other, and similarly for columns) before they can be said to be equal. Bearing in mind that a vector is a particular kind of matrix, we now see why it is desirable to distinguish between a row vector and a column vector even when the two have identical elements.
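The definition of equality, including the same-order requirement, can be sketched as follows; mat_equal is our own helper name:

```python
import numpy as np

def mat_equal(A, B):
    # Equal only if the orders (shapes) match AND corresponding elements agree.
    return A.shape == B.shape and bool((A == B).all())

row = np.array([[1, 2, 3]])       # 1 x 3 row vector
col = np.array([[1], [2], [3]])   # 3 x 1 column vector

print(mat_equal(row, col))        # False: identical elements, different orders
print(mat_equal(row, row.copy())) # True
```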
  • Linear Algebra and Matrix Theory
    CHAPTER 3

    BASIC OPERATIONS FOR MATRICES

    3.1. The Matrix of a System of Linear Equations. In solving a system of linear equations by the elimination method of Chap. 1 , it becomes evident that the unknowns serve merely as marks of position and that the computations are performed upon the array of coefficients. Once an order for the unknowns and the equations has been selected, it becomes a timesaving device to list the coefficients in an orderly array and operate upon the rows.
    DEFINITION . An m × n matrix A over a field F is a rectangular array of mn elements in F, arranged in m rows and n columns as follows :
    The line of coefficients
    is called the i th. row of A , and the line
    is called the j th column of A. Thus an m × n matrix can be regarded as a collection of m row vectors or, alternatively, as a collection of n column vectors. The entries of A are so indexed that each element occurs at the intersection of the i th row and j th column. Without exception the upper index denotes the row, while the lower denotes the column in which it occurs. The whole matrix may be written compactly as
    A matrix with the same number of rows as columns is called square , and the common number is the order of the matrix. Since A in (1) is the array of coefficients of the unknowns in the typical system of linear equations (N) of Chap. 1 , it is called the coefficient matrix of (N). If A is extended to n + 1 columns by adjoining the column vector Y = (y 1 , y 2 , …, y m )T of (N), the resulting matrix is called the augmented matrix of (N).
    Sometimes we shall prefer to write the matrix A of (1) with double subscripts:
    When this is done, it will always be understood that a ij is the element at the intersection of the i th row and j th column.
    In this chapter the basic properties of, and operations for, matrices will be discussed in the light of their applications. In the first part of the chapter the applications to systems of linear equations provide the motivation; then a problem in connection with coordinate systems in vector spaces suggests additional notions for study.
  • Applied Matrix Algebra in the Statistical Sciences
    Chapter 3

    Matrices and Systems of Linear Equations

    3.1 Introduction

    A vector is an ordered array of numbers that determine the magnitude and direction of the vector. A further generalization is that of a matrix A = (a ij ) denoted as the n × k rectangular array
    (3.1)
    with typical element aij (i = 1, 2, ..., n; j = 1, 2, ..., k). A vector can therefore also be thought of as an n × 1 column or a 1 × k row matrix. Alternatively, a matrix may be viewed as a collection of n row vectors
    or k column vectors
    (3.2)
    where
    (3.3)
    for i = 1,2,...,n and j = 1 ,2 ,...,k .
    Matrices derive their importance in research owing to the fact that most of the data in the social, economic, geographical, bioecological, and medical or health sciences can be represented as rectangular arrays of real numbers, where columns, say, represent the measurements or variables and rows represent a multivariate sample of the objects, persons, or land areas for which the measurements are made. Treating such tables as matrices enables the research worker to carry out complex data manipulation within a unified context, independently of the size of the data. Analyses of large data sets have become common with the widespread availability of high-speed electronic computers.

    3.2 General Types of Matrices

    Before considering matrix manipulations such as addition, multiplication, and inversion, we first examine certain special forms that matrices can assume. In what follows, the zero matrix, which contains zeros as elements, is denoted by 0. Generally, a matrix can contain zero elements, nonzero elements, or both. The zero elements are at times distributed in particular patterns giving rise to specific matrix forms.

    3.2.1 Diagonal Matrix

    One of the simplest and most commonly occurring matrices is the diagonal matrix. Generally speaking, a square n × n matrix A = (aij) is said to be in diagonal form when all the elements of A off the main diagonal are zero.
  • Introduction to Linear Algebra, 2nd edition
    • Thomas A Whitelaw (Author)
    • 2019 (Publication Date)
    • Routledge (Publisher)
    CHAPTER TWO    MATRICES
    11.    Introduction
    The purpose of this chapter is to give an account of the elementary properties of matrices—the rectangular arrays of numbers (or other coefficients) that arise from systems of linear equations and in other contexts. In particular, the chapter will introduce operations such as addition and multiplication of matrices and will consider the properties of these operations.
    When we come to discuss matrices and related ideas in linear algebra, we always have in mind a system of scalars—numbers or number-like objects which may arise as coefficients in equations or in other ways. In elementary work the system of scalars is likely to be ℝ (the set of real numbers) or perhaps ℂ (the set of complex numbers), but, from a more advanced point of view, these are just two among many possibilities. Naturally, one wants to construct a body of theory that takes account of these many possibilities and is applicable to the broadest possible variety of sensible systems of scalars. This generality can, without complicating matters, be built into our study of linear algebra right from the start—simply by declaring that we are using as set of available scalars a system F (which could be ℝ or ℂ or some other system). The symbol F will be reserved for this purpose throughout the rest of the book; and, whenever the symbol F appears, the reader will understand, without any further explanation being necessary, that F stands for the (general) system of scalars in use.
    As suggested above, we want a body of theory that applies to any sensible system of scalars. So it is not our intention that F may stand for any algebraic system whatever. Indeed all the generality that we want can be achieved while stipulating that the system F must have all the basic nice properties of ℝ and ℂ. To be precise, F must be what mathematicians call a field. This is not the place to explain in full detail what this technical term "field" means. Suffice it to say that, in demanding that F be a field, we are demanding in particular that: (i) it is possible to add, subtract and multiply the "scalars" in F; (ii) these operations obey the same basic "laws of algebra" as real numbers (e.g. it must be the case that α(β + γ) = αβ + αγ for all α, β, γ ∈ F); and (iii) each nonzero scalar in F has a reciprocal (1/α or α⁻¹) in F.
  • Fundamentals of Linear Algebra
    2 Matrix Algebra
    Linear algebra is the study of linear maps, the vector-valued functions u = L(x) of a vector variable x, having the linearity property: L(ax + by) = aL(x) +bL(y), where a, b are scalars. Such functions can be scaled and added, provided their domains and codomains are the same. We can even multiply or compose two such functions, provided the range of the first is a subset of the domain of the second. We shall see in Chapter 4 that when the domains and codomains are “finite dimensional vector spaces,” the algebra of such functions is basically the matrix algebra which we study in this chapter.
    2.1    Matrix Operations
    Let K be a given field, for example, the field ℝ of real numbers. If m, n ≥ 1 are integers, an m × n matrix over the field K of scalars is an m × n array
    A = (aij) =
    [ a11 … a1n ]
    [  ⋮   ⋱  ⋮ ]
    [ am1 … amn ]
    with aij in K. The scalar aij is called the ij-th or (i, j)-th entry of A. We call m × n the size of the matrix A. Note that m × n is not the same size as n × m, unless m = n. A square matrix of size n is an n × n matrix. Two matrices A = (aij ), B = (bij ) over K are equal if and only if i) they are of the same size, and ii) aij = bij for all i, j.
    The square matrix A = (aij) of size n with
    aij = 1 if i = j, and aij = 0 otherwise    (2.1)
    is called the identity matrix of size n and is denoted by In or simply I if its size is clear from the context. The m × n matrix, not necessarily a square one, with every entry equal to zero is denoted by 0, and is called the zero matrix. A square matrix D = (dij) with dij = 0 if i ≠ j is called a diagonal matrix. A square matrix A = (aij) is upper triangular if every entry below the diagonal is zero, i.e. aij = 0 if i > j. It is lower triangular if aij = 0 for i < j.
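These special forms can be sketched with numpy's built-in constructors; the particular entries chosen are our own:

```python
import numpy as np

I3 = np.eye(3, dtype=int)       # identity: aij = 1 if i = j, else 0
D  = np.diag([2, 5, 7])         # diagonal: zero off the main diagonal
U  = np.triu(np.ones((3, 3)))   # upper triangular: zero below the diagonal
L  = np.tril(np.ones((3, 3)))   # lower triangular: zero above the diagonal

A = np.arange(9).reshape(3, 3)
print(np.array_equal(A @ I3, A))  # True: I behaves like the number 1
```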
    Suppose A = (aij) is an m × n matrix. Its j-th column is the m × 1 matrix
    (a1j, …, amj)T
    and its i-th row is the 1 × n matrix (ai1 … ain). Thus A has m rows and n columns.
  • Linear Algebra with Applications
    • Roger Baker, Kenneth Kuttler (Authors)
    • 2014 (Publication Date)
    • WSPC (Publisher)
    a12 = 2, etc.
    We shall often need to discuss the tuples which occur as rows and columns of a given matrix A. When you see
    A = (a1 ··· ap),
    this equation tells you that A has p columns and that column j is written as aj. Similarly, if you see A written as a column of rows, that equation reveals that A has q rows and that row i is written ri.
    For example, if
    A = [ 1  2 ]
        [ 3  2 ]
        [ 1 −2 ]
    then r1 = (1 2), r2 = (3 2), and r3 = (1 −2).
    There are various operations which are done on matrices. Matrices can be added, multiplied by a scalar, and multiplied by other matrices. To illustrate scalar multiplication, consider an example in which a matrix is multiplied by the scalar 3.
    The new matrix is obtained by multiplying every entry of the original matrix by the given scalar. If A is an m × n matrix, −A is defined to equal (−1)A.
    Two matrices must be the same size to be added. The sum of two matrices is a matrix which is obtained by adding the corresponding entries. Two matrices are equal exactly when they are the same size and the corresponding entries are identical; matrices of different sizes are never equal. As noted above, you write (cij) for the matrix C whose ij-th entry is cij. In doing arithmetic with matrices you must define what happens in terms of the cij, sometimes called the entries or the components of the matrix.
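The scalar multiplication and addition just described can be sketched with plain lists so the entrywise definitions stay visible; the matrix chosen is our own example:

```python
A = [[1, 2],
     [3, 4]]

scaled = [[3 * a for a in row] for row in A]    # multiply every entry by 3
negA   = [[-a for a in row] for row in A]       # -A = (-1)A
total  = [[x + y for x, y in zip(r1, r2)]       # entrywise sum A + (-A)
          for r1, r2 in zip(A, negA)]

print(scaled)  # [[3, 6], [9, 12]]
print(total)   # [[0, 0], [0, 0]]
```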
  • Basic Matrix Theory
    2

    Elementary Matrix Operations

    2.1 Systems of Linear Equations

    At the beginning of Chapter 1 , reference was made to the widely used concept of a system of linear equations. It shall be the purpose of this chapter to consider the problem of how to find a solution for such a given system. First this will be done by an ordinary method, then the language of matrices will be used to simplify and organize the process of finding the solution. This approach will give a motivation for the important matrix theory that is developed. The entire discussion will be used in a later chapter to gain a better understanding of the numerical techniques for finding the solution of a system of linear equations and for finding the inverse of a matrix.
    It will be necessary from time to time to refer to what will be called the general system of linear equations. This will be written in the following form:
    Notice that there are m unknowns indicated, written in the same order in each of the n equations. The coefficients will be assumed to be real numbers, positive, negative, or zero. In case an unknown is missing in an equation, it is written with a zero coefficient. Another important thing to observe is that only the multiples of the unknowns are shown on the left so that the constant term is always on the right.
    Using concepts of the first chapter, these equations can be thought of as equality of components of two vectors. Thus, one can express the system in terms of equality of two vectors. If these are assumed to be column vectors, the system can be written in the form
    A closer inspection of the vector on the left shows that each component is a product of a row vector of a matrix by the same column vector. In other words, the vector equation can be written as the matrix equation
    If the dimensions of the matrices involved are known, this could be simplified to the compact form
    where A = (ars) is an n × m matrix, X is an m × 1 column matrix, and G is an n × 1 column matrix. Furthermore, the components of the row vectors of A are the same as the coefficients of the unknowns in the system of equations. This of course can be done only if the unknowns are written in the same order in each equation with positive, negative, or zero coefficients. The matrix A is called the coefficient matrix of the system of equations. The vector G is known as the constant vector.
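The compact matrix form can be sketched for a small hypothetical system; the coefficients below are our own, chosen only for illustration:

```python
import numpy as np

#  x + 2y = 5
# 3x -  y = 1
A = np.array([[1, 2],
              [3, -1]])       # coefficient matrix
G = np.array([5, 1])          # constant vector

X = np.linalg.solve(A, G)     # solve AX = G
print(np.allclose(A @ X, G))  # True: AX reproduces G
```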
  • Linear Programming: An Introduction to Finite Improvement Algorithms
         Now that you know how to multiply an m × n matrix A by an n × p matrix B, it is instructive to compare the multiplication of matrices with the multiplication of real numbers. For instance, if a and b are real numbers, then it is always the case that ab = ba. On the other hand, for matrices, it may not always be possible to compute BA because the number of elements in a row of B might not be the same as the number of elements in a column of A. The one exception is when A and B are both square matrices (i.e., n × n matrices). Even in this case, AB might not be equal to BA, as the next example shows:
         In the real number system, the number 1 has the property that, for all real numbers a, (a)1 = 1(a) = a. The corresponding notion for matrices is the n × n identity matrix I defined by
    It has the property that, for any m × n matrix A, AI = A, and, for any n × n matrix B, IB = BI = B. The symbol “I” will be used for the identity matrix, and its dimension will be determined by the context; e.g., AI means that I is n × n, and IA means that I is m × m. Other similarities between matrices
    TABLE 4.2. Properties of Matrix Operations (a)

    Matrix addition                      Matrix multiplication
    1. A + B = B + A                     7.  AI = A, IB = B
    2. A + 0 = 0 + A = A                 8.  A0 = 0A = 0
    3. (A + B) + C = A + (B + C)         9.  A0 = 0
    4. t(A + B) = (tA) + (tB)            10. (AB)C = A(BC)
    5. (A + B)x = (Ax) + (Bx)            11. t(AB) = (tA)B = A(tB)
    6. A(x + y) = (Ax) + (Ay)            12. A(B + C) = (AB) + (AC)

    (a) A, B, and C are matrices, x and y are vectors, and t is a scalar.

    and real numbers are summarized in Table 4.2. The justification of the statements in Table 4.2 is left to the reader (see Exercise 4.2.3).
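A few of the identities in Table 4.2 can be spot-checked on concrete matrices; the randomized check below is our own addition, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.integers(-5, 5, (3, 3)) for _ in range(3))
I = np.eye(3, dtype=int)

assert np.array_equal(A + B, B + A)                 # property 1
assert np.array_equal(A @ I, A)                     # property 7
assert np.array_equal((A @ B) @ C, A @ (B @ C))     # property 10
assert np.array_equal(A @ (B + C), A @ B + A @ C)   # property 12
print("all checks passed")
```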
    This section has introduced matrices and has shown the various operations that can be performed on them. The next two sections show how and when matrices can be used to solve n linear equations in n unknowns.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.