Mathematics

Linear Systems

A linear system is a set of linear equations involving multiple variables. Such systems can be solved by various methods, such as substitution, elimination, or matrix operations. A solution to a linear system is an assignment of values that satisfies every equation simultaneously; geometrically, it is a point where the graphs of the equations intersect, providing insight into the relationships between the variables involved.

Written by Perlego with AI-assistance

7 Key excerpts on "Linear Systems"

  • Numerical Methods for Chemical Engineers Using Excel, VBA, and MATLAB
    • Victor J. Law (Author)
    • 2013 (Publication Date)
    • CRC Press
      (Publisher)

    3 Linear Algebra and Systems of Linear Equations

    3.1 INTRODUCTION

    Linear algebra is a topic that many students of numerical methods will have been exposed to in mathematics classes. In this chapter, a brief review of linear algebra is given along with numerical methods for solving problems that are common in engineering and scientific applications. Linear algebra involves the manipulation of linear relationships and usually involves the use of vectors and matrices. The most common problem class of linear algebra is the solution of a set of linear algebraic equations.

    3.2 NOTATION

    • Scalars are indicated by a lowercase letter.
    • Vectors are also identified by a lowercase letter; the context distinguishes between a scalar and a vector.
    • Matrices are designated by a capital letter.
    • By default, vectors are column vectors. To show a row vector, the transpose operator (a superscript T) is used, as in xᵀ.
    • When necessary, the dimensions of matrices and vectors are shown by scalar subscripts. For example, A m×n indicates a matrix with m rows and n columns. Further, y n×1 designates a column vector of n elements.
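    To make the notation concrete, here is a minimal sketch (assuming NumPy, which is not part of the text) mirroring these conventions: a scalar, a column vector and its transpose, and a matrix whose dimensions can be read off.

```python
import numpy as np

c = 2.5                                  # scalar: a lowercase letter
x = np.array([[1.0], [2.0], [3.0]])      # column vector x (the default), shape (3, 1)
x_T = x.T                                # row vector x^T, shape (1, 3)
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])               # matrix A with m = 3 rows and n = 2 columns

print(x.shape, x_T.shape, A.shape)       # (3, 1) (1, 3) (3, 2)
```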
    Definition: The equation
    ax + by + cz + dw = h
    where a, b, c, d, and h are known numbers, while x, y, z, and w are unknown numbers (variables), is called a linear equation. If h = 0, the linear equation is said to be homogeneous. A linear system is a set of linear equations, and a homogeneous linear system is a set of homogeneous linear equations.
    For example, is a linear system. But
    is a nonlinear system (because of ).
    The system is a homogeneous linear system. Vectors and matrices offer a convenient, compact way of representing, manipulating, and solving linear systems. These are introduced in the next few sections.

    3.3 VECTORS

    A vector is an ordered set of numbers arranged as a column (the default). An m
  • Linear Algebra: A Minimal Polynomial Approach to Eigen Theory
    eBook - ePub
    • Fernando Barrera-Mora (Author)
    • 2023 (Publication Date)
    • De Gruyter
      (Publisher)
    1  Systems of linear equations
    In a wide range of mathematical problems, applied or not, the approach to their solution is based on the study of a system of linear equations. For instance, in economics, Wassily W. Leontief,1 using a system of linear equations, proposed an economic model to describe the economic activity of the USA. Roughly speaking, Leontief's model consists of dividing the US economy into 500 consortia. Starting from this, a system of linear equations is formulated to describe the way in which a consortium distributes its production among the others; see Example 1.1.2.
    From the mathematical point of view, some problems are formulated as a system of linear equations, while many more can be formulated using one or more functions, which, under good assumptions, are represented linearly, bringing linear algebra into action. These aspects show the importance of starting a systematic and deep study of the solutions of a system of linear equations. In order to achieve this, a conceptual discussion of such ideas is in order. We will need to discuss concepts such as: vector space, basis, dimension, linear transformations, eigenvalues, eigenvectors, determinants, among others.
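    As a rough illustration of the kind of system Leontief's model produces, the following sketch (assuming NumPy; the two-sector coefficients and demands are hypothetical, not Leontief's data) solves the standard input-output balance (I − A)x = d for the required output of each sector.

```python
import numpy as np

# Hypothetical 2-sector coefficients: entry (i, j) is the amount of sector i's
# output consumed to produce one unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
d = np.array([10.0, 20.0])   # hypothetical final demand for each sector

# Total output x must cover inter-sector use plus final demand: x = A x + d,
# i.e. the linear system (I - A) x = d.
x = np.linalg.solve(np.eye(2) - A, d)
print(x)                     # production each sector must supply
```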
    We will start by presenting some examples that will illustrate the use of systems of linear equations when approaching a problem.

    1.1  Examples

    In this section we present some examples that illustrate the use of a system of linear equations to approach hypothetical problematic situations. This idea will lead us to discuss terms and results that can help us understand how linear equations are used to model some mathematical problems.

    Example 1.1.1.

    Assume that there are two companies, E1 and E2, and that each company manufactures products, B1 and B2. Furthermore, assume that for each monetary unit invested, the production is as described in Table 1.1
  • Mathematics for Economics and Finance
    • Michael Harrison, Patrick Waldron (Authors)
    • 2011 (Publication Date)
    • Routledge
      (Publisher)

    Systems of linear equations and matrices

    DOI: 10.4324/9780203829998-3

    1.1   Introduction

    This chapter focuses on matrices. It begins by discussing linear relationships and systems of linear equations, and then introduces the matrix concept as a tool for helping to handle and analyse such systems. Several examples of how matrices might arise in specific economic applications are given to motivate the mathematical detail that follows. These examples will be used again and further developed later in the book. The mathematical material that follows the examples comprises discussions of matrix operations, the rules of matrix algebra, and a taxonomy of special types of matrix encountered in economic and financial applications.

    1.2    Linear equations and examples

    Linear algebra is a body of mathematics that helps us to handle, analyse and solve systems of linear relationships. A great deal of economics and finance makes use of such linear relationships. A linear relationship may be represented by an equation of the form
    z = αx + βy    (1.1)
    where x, y and z are variables and α and β are constants. Such relationships have several nice properties. One is that they are homogeneous of degree one, or linearly homogeneous, i.e. if all variables on the right-hand side are scaled (multiplied) by a constant, θ, then the left-hand side is scaled in the same way. Specifically, using (1.1), we have
    z* = α(θx) + β(θy) = θ(αx + βy) = θz    (1.2)
    Another property of linear relationships is that, for different sets of values for their variables, they are additive and their sum is also linear. Suppose we have the two equations z1 = αx1 + βy1 and z2 = αx2 + βy2; then
    z1 + z2 = α(x1 + x2) + β(y1 + y2)    (1.3)
    after slight rearrangement, which may be written as
    Z = αX + βY    (1.4)
    where X = x1 + x2, Y = y1 + y2 and Z = z1 + z2. The result, equation (1.4), is a linear equation in the sums of the respective variables. The generalization to the case of n equations is straightforward and has X = Σ xi, Y = Σ yi and Z = Σ zi, with each sum running over i = 1, …, n.
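    A minimal numerical check of these two properties, using arbitrary illustrative values for α, β, θ and the variables (plain Python; nothing here is taken from the text):

```python
alpha, beta = 2.0, -0.5      # arbitrary illustrative constants
z = lambda x, y: alpha * x + beta * y

theta = 3.0
x1, y1 = 1.5, 4.0
x2, y2 = -2.0, 0.5

# Homogeneity of degree one, as in (1.2): scaling both inputs scales the output.
print(abs(z(theta * x1, theta * y1) - theta * z(x1, y1)) < 1e-12)

# Additivity, as in (1.3)-(1.4): the sum of outputs equals the output of summed inputs.
print(abs((z(x1, y1) + z(x2, y2)) - z(x1 + x2, y1 + y2)) < 1e-12)
```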
    These simple properties constitute one reason why linear relationships are so widely used in economics and finance, and particularly when relationships, such as demand and supply curves, are first introduced to students.1
  • Principles of Linear Algebra with Mathematica
    • Kenneth M. Shiskowski, Karl Frinkle (Authors)
    • 2013 (Publication Date)
    • Wiley
      (Publisher)

    Chapter 2

    Linear Systems of Equations and Matrices

    2.1 Linear Systems of Equations

    The basic idea behind a general linear system of equations is that of a system of two xy-plane line equations ax + by = c and dx + ey = f, for real constants a through f. Such a system of two-line equations in the xy-plane has a simultaneous solution, the intersection set of the two lines. In general, we consider a simultaneous solution to be the collection of points that simultaneously satisfies all equations in a given system of equations. In the two-line equation system example we have only three possibilities. If the two lines are not parallel, then their intersection set is a single point P; hence the simultaneous solution will simply be {P}. If the two lines are parallel and distinct, then their intersection set is empty and there is no simultaneous solution. Last, if the two lines are the same, the simultaneous solution is simply the entire line, which consists of an infinite number of points. We now give an example of each of these three situations. As you will see, the solution points of each of these three systems are where each pair of lines intersect. In these systems, in order to solve by hand for the solution points, we must try to solve for one of the variables x or y by itself without the other variable. In essence, we are assuming each pair of lines intersect at a single point unless we discover otherwise.
    Example 2.1.1 . Our first example is to solve the following system:
    (2.1)
    In order to solve for one of the variables x or y by itself without the other variable, we can multiply the first equation by 3 and multiply the second equation by 2, which makes the coefficients of y negatives of each other. Now, we can add the two equations together, eliminating the variable y and allowing us to solve for x. Similarly, we can solve for y
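    Since system (2.1) itself is not reproduced in the excerpt, the following sketch applies exactly these elimination steps to a hypothetical system, 4x + 2y = 10 and x − 3y = −1, chosen so that multiplying by 3 and 2 makes the y coefficients negatives of each other (plain Python):

```python
# Hypothetical stand-in for system (2.1):
#   4x + 2y = 10
#    x - 3y = -1
eq1 = (4.0, 2.0, 10.0)    # coefficients (a, b, c) of ax + by = c
eq2 = (1.0, -3.0, -1.0)

# Multiply the first equation by 3 and the second by 2 so the y coefficients
# become negatives of each other (6 and -6).
eq1_scaled = tuple(3.0 * v for v in eq1)   # 12x + 6y = 30
eq2_scaled = tuple(2.0 * v for v in eq2)   #  2x - 6y = -2

# Adding the scaled equations eliminates y, leaving one equation in x alone.
a_sum = eq1_scaled[0] + eq2_scaled[0]      # 14
c_sum = eq1_scaled[2] + eq2_scaled[2]      # 28
x = c_sum / a_sum                          # x = 2

# Back-substitute into the first original equation to recover y.
y = (eq1[2] - eq1[0] * x) / eq1[1]         # y = 1
print(x, y)
```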
  • What Every Engineer Should Know about Computer Modeling and Simulation
    • Don M. Ingels (Author)
    • 2021 (Publication Date)
    • CRC Press
      (Publisher)
    SOLVING THE MATHEMATICAL MODEL
    Though this be madness, yet there be method in’t.
    [Shakespeare]

    5.1 INTRODUCTION

    As mentioned in Chapter 2 , writing down a set of equations might constitute a model, but equations are of little use unless they can be solved. The two basic solution approaches are: (1) analytical and (2) numerical. The analytical techniques are more informative and efficient to use, but much less general in their range of applicability, i.e., it is common to have equations with no known explicit analytic solution. These will be left to texts on mathematics [13 , 14 ]. Because of their general usefulness, the material in this book is restricted to a discussion of some useful numerical procedures that are commonly used in computer simulations.
    Two different classes of problems are encountered when modeling systems with sets of equations, whether the equations are linear or nonlinear. These two categories are:
    1. n equations in n unknowns
    2. m equations in n unknowns, m < n
    with k inequalities in n unknowns, where k can be any number. If the equations in class 1 problems are independent, they can usually be solved by methods such as those described below. Class 2 problems are solved using optimization techniques, linear programming [1] for linear systems, and nonlinear programming techniques [2, 3, 7] for nonlinear systems.
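    A minimal sketch of a class 2 problem (assuming SciPy is available; the one-equation, two-unknown system, the inequality, and the cost are all hypothetical) solved with linear programming:

```python
from scipy.optimize import linprog

# Hypothetical class 2 problem: m = 1 equation in n = 2 unknowns (m < n),
# one inequality, and a linear cost to minimize.
c = [1.0, 2.0]              # minimize x1 + 2*x2
A_eq = [[1.0, 1.0]]         # the single equation: x1 + x2 = 4
b_eq = [4.0]
A_ub = [[-1.0, 1.0]]        # the inequality: -x1 + x2 <= 1
b_ub = [1.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(result.x)             # an optimal point satisfying the equation, inequality and bounds
```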

    5.2 LINEAR ALGEBRAIC SYSTEMS

    A set of linear equations can be written as:
    a11 x1 + a12 x2 + ⋯ + a1n xn = b1
    a21 x1 + a22 x2 + ⋯ + a2n xn = b2
    ⋮
    an1 x1 + an2 x2 + ⋯ + ann xn = bn
    (5.1)
    where x1 … xn are the n unknowns, aij are known coefficients, and bi are known constants.
    If the number of equations is large, somewhere over four, writing them out explicitly soon becomes tedious and then they are most frequently expressed in array notation in terms of a matrix A and two vectors, x and b.
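    In array notation the system (5.1) becomes Ax = b, and a library solver can then be applied directly. A minimal sketch (assuming NumPy; the 3×3 coefficients and constants are hypothetical):

```python
import numpy as np

# Hypothetical 3x3 instance of (5.1): A holds the coefficients a_ij,
# b holds the constants b_i, and x holds the unknowns x_1 ... x_n.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, 1.0,  1.0]])
b = np.array([3.0, 12.0, 8.0])

x = np.linalg.solve(A, b)    # solves A x = b for the vector of unknowns
print(x)
```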
  • Linear Algebra: An Inquiry-Based Approach
    eBook - ePub
    • Jeff Suzuki (Author)
    • 2021 (Publication Date)
    • CRC Press
      (Publisher)
    all equations are in standard form, we’ll be able to work with the vectors as easily as we could have worked with the equations.
    For example, the system of equations
    3x + 5y = 11
    2x − 7y = −4
    could be represented as the vectors ⟨3, 5, 11⟩ and ⟨2, −7, −4⟩, where the first component is the coefficient of x; the second component is the coefficient of y; and the last component is the constant.
    Actually, if we write our two equations as two vectors, we might forget that they are in fact related. To reinforce their relationship, we'll represent them as a single object by throwing them inside a set of parentheses. We might represent the evolution of our notation as
    { 3x + 5y = 11, 2x − 7y = −4 }   ⟶   ⟨3, 5, 11⟩, ⟨2, −7, −4⟩   ⟶
    ⎛ 3    5   11 ⎞
    ⎝ 2   −7   −4 ⎠
    As a general rule, mathematicians are terrible at coming up with new words, so we appropriate existing words that convey the essential meaning of what we want to express.
    In geology, when precious objects like gemstones or fossils are embedded in a rock, the rock is called a matrix. The term is also used in biology for the structure that holds other precious objects—the internal organs—in place, and in construction for the mortar that holds bricks in place. Thus in 1850, the mathematician James Joseph Sylvester (1814–1897) appropriated the term for the structure that keeps the coefficients and constants of a linear equation in place.
    Since the matrix shown includes both the coefficients of our equation and the constants, we can describe it as the coefficient matrix augmented by a column of constants or, more simply, the augmented coefficient matrix
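    A minimal sketch (assuming SymPy) that builds the augmented coefficient matrix for the system quoted above, as reconstructed in this excerpt, and row-reduces it to read off the solution:

```python
from sympy import Matrix

# Augmented coefficient matrix of 3x + 5y = 11 and 2x - 7y = -4:
# coefficients augmented by the column of constants.
augmented = Matrix([[3, 5, 11],
                    [2, -7, -4]])

reduced, pivot_columns = augmented.rref()   # row-reduce to reduced echelon form
print(reduced)                              # the last column gives x and y
```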
  • Solutions Manual to accompany Ordinary Differential Equations
    • Michael D. Greenberg (Author)
    • 2014 (Publication Date)
    • Wiley
      (Publisher)

    CHAPTER 4

    Systems of Linear Differential Equations

    Section 4.1

    We began with the simplest case in Chapter 1: a single first-order equation. From there, one can, logically, proceed in two different directions: single equations of higher order, or systems of more than one first-order equation. We took the former path in Chapter 2, and in this chapter we take the latter. In both cases, the step up in complexity, from first-order equations, is so great that, unlike Chapter 1, we back off and consider only the case of linear equations. Not until Chapter 7 do we consider higher-order nonlinear equations.
    Just as matrix theory is the natural format for the study of systems of linear algebraic equations, it is also the natural format for the study of systems of linear differential equations, and that discussion begins in Section 4.3. In this first section, Section 4.1, we establish three things: the idea of systems and some of the relevant definitions, a fundamental theorem for the existence and uniqueness of solutions, and an elementary solution method by elimination. The latter method takes us from two given coupled first-order equations to uncoupled second-order equations, thereby bringing us back to single equations of higher order, which are by now familiar territory, from Chapter 2. That idea proves simple and convenient for systems of two equations, but for more than two equations the matrix methods developed in subsequent sections will be more convenient.
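    A minimal sketch (assuming SymPy) of solving a coupled first-order linear system symbolically; the first equation is chosen so that solving it for y gives y = (x′ − x)/2, matching the algebra quoted in Example 1 below, while the second equation is a hypothetical stand-in, since (A2) is not reproduced in the excerpt:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

# Hypothetical coupled pair of first-order linear equations.
eqs = [
    sp.Eq(x(t).diff(t), x(t) + 2 * y(t)),   # solving this for y gives y = (x' - x)/2
    sp.Eq(y(t).diff(t), 2 * x(t) + y(t)),   # illustrative stand-in for the second equation
]

solution = sp.dsolve(eqs)   # general solution, with two arbitrary constants C1, C2
print(solution)
```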

    EXAMPLES

    Example 1. (Solution by elimination) Solve for the general solution of the following system, by elimination:
    (A1)
    (A2)
    SOLUTION. We begin either by solving (A1) by algebra for y, or (A2) by algebra for x; either will work. The former gives y = (x′ − x)/2, and substituting that into (A2) for the y and y
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.