Continuous Time Dynamical Systems

State Estimation and Optimal Control with Orthogonal Functions
About This Book

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional.

This book, Continuous Time Dynamical Systems: State Estimation and Optimal Control with Orthogonal Functions, considers different classes of systems with quadratic performance criteria. It then attempts to find the optimal control law for each class of systems using orthogonal functions that can optimize the given performance criteria.

Illustrated throughout with detailed examples, the book covers topics including:

  • Block-pulse functions and shifted Legendre polynomials
  • State estimation of linear time-invariant systems
  • Linear optimal control systems incorporating observers
  • Optimal control of systems described by integro-differential equations
  • Linear-quadratic-Gaussian control
  • Optimal control of singular systems
  • Optimal control of time-delay systems with and without reverse time terms
  • Optimal control of second-order nonlinear systems
  • Hierarchical control of linear time-invariant and time-varying systems


Information

Authors: B.M. Mohan, S.K. Kar
Publisher: CRC Press
Year: 2018
ISBN: 9781351832236
Edition: 1
Length: 247 pages
Chapter 1
Introduction
The optimal control problem is introduced first. Optimal control problems for a variety of systems, and their solution via different classes of orthogonal functions (OFs), are then discussed in the literature review section. The objectives and contributions of the book are stated. The organization of the book is given in the last section.
1.1 Optimal Control Problem
A particular type of system design problem is the problem of "controlling" a system. The translation of control system design objectives into mathematical language gives rise to the control problem. The essential elements of the control problem are
• A desired output of the system.
• A set of admissible inputs or "controls."
• A performance or cost functional which measures the effectiveness of a given "control action."
The objective of the system is often translated into a requirement on the output. Since "control" signals in physical systems are usually obtained from equipment which can provide only a limited amount of force or energy, constraints are imposed upon the inputs to the system. These constraints lead to a set of admissible inputs.
Frequently the desired objectives can be attained by many admissible inputs, so the engineer seeks a measure of performance or cost of control which will allow him/her to choose the "best" input. The choice of a mathematical performance functional is a subjective matter. Moreover, the cost functional will depend upon the desired behaviour of the system. Most of the time, the cost functional chosen will depend upon the input and the pertinent system variables. When a cost functional has been decided upon, the engineer formulates his/her control problem as follows:
Determine the admissible inputs which generate the desired output and which optimize the chosen performance measure.
At this point, optimal control theory enters the picture to aid the engineer in finding a solution to his/her control problem. Such a solution, when it exists, is called an optimal control. Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic functional is called a linear quadratic problem. One of the main results in the theory is that the solution is provided by the linear-quadratic-regulator (LQR).
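Concretely, the linear quadratic problem takes the following standard form (generic notation; a textbook statement rather than one tied to a particular chapter of this book). For the linear system

    \dot{x}(t) = A\,x(t) + B\,u(t), \qquad x(0) = x_0,

with the quadratic cost functional

    J = \tfrac{1}{2}\,x^T(t_f)\,S\,x(t_f) + \tfrac{1}{2}\int_0^{t_f} \left[ x^T(t)\,Q\,x(t) + u^T(t)\,R\,u(t) \right] dt,

the optimal control is the linear state feedback

    u(t) = -R^{-1} B^T K(t)\,x(t),

where the time-varying gain K(t) is obtained by solving the matrix Riccati differential equation backward in time:

    -\dot{K}(t) = K(t)A + A^T K(t) - K(t)\,B R^{-1} B^T K(t) + Q, \qquad K(t_f) = S.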
In this book different classes of systems are considered with quadratic performance criteria and an attempt is made to find the optimal control law for each class of systems using OFs that can optimize the given performance criteria.
1.2 Historical Perspective
The Legendre polynomials (LPs) originated from determining the force of attraction exerted by solids of revolution [5], and their orthogonality properties were established by Adrien-Marie Legendre during 1784–1790. There is another family of OFs known as piecewise constant basis functions, whose values are constant within each subinterval of the time period. Among these is a class of complete OFs known as block-pulse functions (BPFs), which is particularly popular and elegant in the areas of parameter estimation, analysis and control.
Control of linear systems by minimizing a quadratic performance index gives rise to a time-varying gain for the linear state feedback, and this gain is obtained by solving a matrix Riccati differential equation [3]. Chen and Hsiao (1975) were probably the first to apply a class of piecewise constant OFs, namely Walsh functions (WFs), to obtain a numerical solution of the matrix Riccati equation [6] and hence the time-varying gain. Many researchers then started investigating problems of identification, analysis and control using different classes of OFs. The operational matrix for integration of BPFs was derived in [8], where it was also shown that BPFs are more fundamental than WFs and that the integration operational matrix of BPFs has a simpler structure than that of WFs, as sketched below. In [9] it was shown that optimal control problems could be solved using BPFs with minimal computational effort.
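For concreteness, the BPF integration operational matrix has the well-known closed form P = h(I/2 + U), where h is the subinterval width and U is the strictly upper triangular matrix of ones. The following minimal Python sketch is our own illustration (not code from [8]); it builds P and checks it numerically on cos(t):

    import numpy as np

    # Integration operational matrix of m block-pulse functions on [0, T]:
    # P = h * (I/2 + strictly upper triangular ones), with h = T/m.
    def bpf_integration_matrix(m, T=1.0):
        h = T / m
        return h * (0.5 * np.eye(m) + np.triu(np.ones((m, m)), k=1))

    # BPF coefficients of f: value of f at each subinterval midpoint
    # (a simple stand-in for the exact interval average).
    def bpf_coefficients(f, m, T=1.0):
        h = T / m
        return f((np.arange(m) + 0.5) * h)

    m, T = 64, 1.0
    P = bpf_integration_matrix(m, T)
    c = bpf_coefficients(np.cos, m, T)   # coefficients of cos(t)
    c_int = c @ P                        # BPF coefficients of its integral
    t_mid = (np.arange(m) + 0.5) * (T / m)
    print(np.max(np.abs(c_int - np.sin(t_mid))))  # small: integral of cos is sin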
In the last three and a half decades, the OF approach has been successfully applied to study a variety of problems in systems and control [22, 57, 59, 61]. The key feature of OFs is that they convert differential or integral equations into algebraic equations in the sense of least squares. This approach therefore became quite popular computationally, as the dynamical equations of a system can be converted into a set of algebraic equations whose solution leads to the solution of the problem, as the sketch below illustrates.
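The simplest case makes this concrete: with BPFs, the scalar equation xdot = a*x, x(0) = x0 becomes one linear algebraic system. The Python sketch below is our own hand-rolled illustration under the same BPF conventions as above, not an algorithm reproduced from the cited references:

    import numpy as np

    # Solve xdot = a*x, x(0) = x0 on [0, T] with m block-pulse functions.
    a, x0, T, m = -2.0, 1.0, 1.0, 64
    h = T / m
    P = h * (0.5 * np.eye(m) + np.triu(np.ones((m, m)), k=1))  # integration matrix

    # Integrating the state equation: x(t) = x0 + a * int_0^t x(tau) dtau.
    # With x(t) ~ c . phi(t), and integration replaced by P, this becomes
    # the linear algebraic system (I - a P^T) c = x0 * ones.
    c = np.linalg.solve(np.eye(m) - a * P.T, x0 * np.ones(m))

    t_mid = (np.arange(m) + 0.5) * h
    print(np.max(np.abs(c - x0 * np.exp(a * t_mid))))  # close to the exact e^{a t}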
Before going into the details of the optimal control problem, we first take a look at the problem of estimation of state variables, as state estimation plays an important role in the context of state feedback control. State feedback control system design requires knowledge of the state vector of the plant. Sometimes, no state variables or only a few state variables are available for measurement. In such cases an observer, either full order or reduced order depending on the situation, is incorporated to estimate the unknown state variables if the plant is observable. In general, the state is estimated using the Luenberger observer [1]. But the Luenberger observer produces erroneous estimates in a noisy environment unless the measurement noise is filtered out. Interestingly, the OF approach has an inherent filtering property [59], as it involves an integration process which has a smoothing effect. As it appears from the literature, two attempts have been made on the state estimation problem so far, using two different classes of OFs: BPFs [23] and shifted Chebyshev polynomials of the first kind (SCP1s) [39]. It is observed that the BPF approach [23] is purely recursive and uses multiple integration. The number of integrations increases with the order of the system, i.e., for an nth-order system the state equation has to be integrated n times, which is computationally not attractive.
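The Luenberger observer idea itself is simple to simulate. The sketch below is our own generic Python illustration (the matrices A, B, C and the gain L are arbitrary choices made so that A - L C is stable; it is not an example from [1] or [23]):

    import numpy as np

    # Plant: xdot = A x + B u, y = C x.
    # Observer: xhat_dot = A xhat + B u + L (y - C xhat).
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[5.0], [10.0]])   # chosen so that A - L C is stable

    dt, steps = 1e-3, 5000
    x = np.array([1.0, -1.0])       # true (unknown) initial state
    xhat = np.zeros(2)              # observer starts from zero

    for k in range(steps):          # forward-Euler simulation
        u = np.array([np.sin(k * dt)])   # arbitrary test input
        y = C @ x                        # measured output
        x = x + dt * (A @ x + B @ u)
        xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat))

    print(np.abs(x - xhat))  # estimation error has decayed toward zero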
Next, coming to the SCP1 approach [39], the integration operational matrix of SCP1s is less sparse than that of shifted Legendre polynomials (SLPs), so algorithms developed using SLPs are computationally more elegant; the sparsity difference is illustrated below. Moreover, state estimation cannot be done via SCP1s in a noisy environment.
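The SLP integration operational matrix on [0, 1] is nearly tridiagonal, following from the standard recurrence int_0^t P*_i = (P*_{i+1} - P*_{i-1}) / (2(2i+1)) for i >= 1 (the SCP1 matrix, by contrast, carries a dense first column). The construction below is our own sketch of this commonly stated form, offered as a plausible reading of the sparsity claim rather than code from [39]:

    import numpy as np

    def slp_integration_matrix(m):
        """Integration operational matrix P for shifted Legendre
        polynomials on [0, 1]: if c holds the SLP coefficients of f,
        c @ P approximates the SLP coefficients of its integral."""
        P = np.zeros((m, m))
        P[0, 0] = 0.5                       # int P*_0 = (P*_0 + P*_1) / 2
        if m > 1:
            P[0, 1] = 0.5
        for i in range(1, m):
            P[i, i - 1] = -1.0 / (2 * (2 * i + 1))
            if i + 1 < m:                   # truncate beyond order m-1
                P[i, i + 1] = 1.0 / (2 * (2 * i + 1))
        return P

    print(np.round(slp_integration_matrix(5), 4))  # only two bands are nonzero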
The problem of optimal control incorporating observers has been successfully studied via different classes of OFs, namely BPFs [19], SLPs [30, 47], shifted Jacobi polynomials (SJPs) [33], general orthogonal polynomials (GOPs) [41], sine-cosine functions (SCFs) [44, 47], SCP1s [25, 47], shifted Chebyshev polynomials of the second kind (SCP2s) [47] and single-term Walsh series [52]. The approach followed in [25, 30, 33, 41, 44, 47] is nonrecursive, while that in [19, 52] is recursive, which makes the nonrecursive approach of [25, 30, 33, 41, 44, 47] computationally less attractive.
Synthesis of optimal control laws for deterministic systems described by integro-differential equations has been investigated [11] via the dynamic programming approach. Subsequently, this problem has been studied via BPFs [37], SLPs [34, 43], and SCP1s [42].
The linear-quadratic-Gaussian (LQG) control problem [4] concerns linear systems disturbed by additive white Gaussian noise, with incomplete state information and quadratic costs. The LQG controller is simply the combination of a linear-quadratic estimator (LQE), i.e. a Kalman filter, and an LQR. The separation principle guarantees that these can be designed and computed independently, as the sketch below shows. In [35] the solution of the LQG control design problem has been obtained by employing GOPs: the nonlinear Riccati differential equations are reduced to a set of nonlinear algebraic equations, which is then solved to obtain the solution. This approach is neither simple nor elegant computationally, as nonlinear equations are involved.
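The separation principle can be made concrete: the LQR gain and the Kalman gain come from two independent algebraic Riccati equations. The SciPy sketch below is our own generic illustration with arbitrarily chosen matrices and noise covariances, not the GOP-based method of [35]:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(2), np.eye(1)                 # control weights (assumed)
    W, V = 0.1 * np.eye(2), 0.01 * np.eye(1)    # process / measurement noise covariances

    # LQR: solve A'S + SA - S B R^-1 B' S + Q = 0, then K = R^-1 B' S.
    S = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ S)

    # LQE (Kalman filter): the dual Riccati equation, gain Lk = P C' V^-1.
    P = solve_continuous_are(A.T, C.T, W, V)
    Lk = P @ C.T @ np.linalg.inv(V)

    # LQG controller: xhat_dot = A xhat + B u + Lk (y - C xhat), u = -K xhat.
    print("LQR gain K:", K)
    print("Kalman gain L:", Lk)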
Singular systems have been of considerable importance as they are often encountered in many areas. Singular systems arise naturally in describing large-scale systems [53]; examples occur in power and interconnected systems. In general, an interconnection of state variable subsystems is conveniently described as a singular system. Singular systems are also called generalized state-space systems, implicit systems, semi-state systems, or descriptor systems. Optimal control of singular systems has been discussed in [15] and [18]. In [38] the necessary conditions for the existence of optimal control have been derived, and it has been shown that the optimal control design problem reduces to a two-point boundary-value (TPBV) problem for the determination of t...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright Page
  4. Dedication
  5. Table of Contents
  6. List of Abbreviations
  7. Preface
  8. Acknowledgements
  9. About the Authors
  10. 1 Introduction
  11. 2 Orthogonal Functions and Their Properties
  12. 3 State Estimation
  13. 4 Linear Optimal Control Systems Incorporating Observers
  14. 5 Optimal Control of Systems Described by Integro-Differential Equations
  15. 6 Linear-Quadratic-Gaussian Control
  16. 7 Optimal Control of Singular Systems
  17. 8 Optimal Control of Time-Delay Systems
  18. 9 Optimal Control of Nonlinear Systems
  19. 10 Hierarchical Control of Linear Systems
  20. 11 Epilogue
  21. Bibliography
  22. Index