About the Authors xi
Preface xiii
Acronyms xv
Introduction xvii

1 Nonlinear Systems Analysis 1
  1.1 Notation 1
  1.2 Nonlinear Dynamical Systems 2
    1.2.1 Remarks on Existence, Uniqueness, and Continuation of Solutions 2
  1.3 Lyapunov Analysis of Stability 3
  1.4 Stability Analysis of Discrete Time Dynamical Systems 7
  1.5 Summary 10
  Bibliography 10

2 Optimal Control 11
  2.1 Problem Formulation 11
  2.2 Dynamic Programming 12
    2.2.1 Principle of Optimality 12
    2.2.2 Hamilton-Jacobi-Bellman Equation 14
    2.2.3 A Sufficient Condition for Optimality 15
    2.2.4 Infinite-Horizon Problems 16
  2.3 Linear Quadratic Regulator 18
    2.3.1 Differential Riccati Equation 18
    2.3.2 Algebraic Riccati Equation 23
    2.3.3 Convergence of Solutions to the Differential Riccati Equation 26
    2.3.4 Forward Propagation of the Differential Riccati Equation for Linear Quadratic Regulator 28
  2.4 Summary 30
  Bibliography 30

3 Reinforcement Learning 33
  3.1 Control-Affine Systems with Quadratic Costs 33
  3.2 Exact Policy Iteration 35
    3.2.1 Linear Quadratic Regulator 39
  3.3 Policy Iteration with Unknown Dynamics and Function Approximations 41
    3.3.1 Linear Quadratic Regulator with Unknown Dynamics 46
  3.4 Summary 47
  Bibliography 48

4 Learning of Dynamic Models 51
  4.1 Introduction 51
    4.1.1 Autonomous Systems 51
    4.1.2 Control Systems 51
  4.2 Model Selection 52
    4.2.1 Gray-Box vs. Black-Box 52
    4.2.2 Parametric vs. Nonparametric 52
  4.3 Parametric Model 54
    4.3.1 Model in Terms of Bases 54
    4.3.2 Data Collection 55
    4.3.3 Learning of Control Systems 55
  4.4 Parametric Learning Algorithms 56
    4.4.1 Least Squares 56
    4.4.2 Recursive Least Squares 57
    4.4.3 Gradient Descent 59
    4.4.4 Sparse Regression 60
  4.5 Persistence of Excitation 60
  4.6 Python Toolbox 61
    4.6.1 Configurations 62
    4.6.2 Model Update 62
    4.6.3 Model Validation 63
  4.7 Comparison Results 64
    4.7.1 Convergence of Parameters 65
    4.7.2 Error Analysis 67
    4.7.3 Runtime Results 69
  4.8 Summary 73
  Bibliography 75

5 Structured Online Learning-Based Control of Continuous-Time Nonlinear Systems 77
  5.1 Introduction 77
  5.2 A Structured Approximate Optimal Control Framework 77
  5.3 Local Stability and Optimality Analysis 81
    5.3.1 Linear Quadratic Regulator 81
    5.3.2 SOL Control 82
  5.4 SOL Algorithm 83
    5.4.1 ODE Solver and Control Update 84
    5.4.2 Identified Model Update 85
    5.4.3 Database Update 85
    5.4.4 Limitations and Implementation Considerations 86
    5.4.5 Asymptotic Convergence with Approximate Dynamics 87
  5.5 Simulation Results 87
    5.5.1 Systems Identifiable in Terms of a Given Set of Bases 88
    5.5.2 Systems to Be Approximated by a Given Set of Bases 91
    5.5.3 Comparison Results 98
  5.6 Summary 99
  Bibliography 99

6 A Structured Online Learning Approach to Nonlinear Tracking with Unknown Dynamics 103
  6.1 Introduction 103
  6.2 A Structured Online Learning for Tracking Control 104
    6.2.1 Stability and Optimality in the Linear Case 108
  6.3 Learning-Based Tracking Control Using SOL 111
  6.4 Simulation Results 112
    6.4.1 Tracking Control of the Pendulum 113
    6.4.2 Synchronization of Chaotic Lorenz System 114
  6.5 Summary 115
  Bibliography 118

7 Piecewise Learning and Control with Stability Guarantees 121
  7.1 Introduction 121
  7.2 Problem Formulation 122
  7.3 The Piecewise Learning and Control Framework 122
    7.3.1 System Identification 123
    7.3.2 Database 124
    7.3.3 Feedback Control 125
  7.4 Analysis of Uncertainty Bounds 125
    7.4.1 Quadratic Programs for Bounding Errors 126
  7.5 Stability Verification for Piecewise-Affine Learning and Control 129
    7.5.1 Piecewise Affine Models 129
    7.5.2 MIQP-Based Stability Verification of PWA Systems 130
    7.5.3 Convergence of ACCPM 133
  7.6 Numerical Results 134
    7.6.1 Pendulum System 134
    7.6.2 Dynamic Vehicle System with Skidding 138
    7.6.3 Comparison of Runtime Results 140
  7.7 Summary 142
  Bibliography 143

8 An Application to Solar Photovoltaic Systems 147
  8.1 Introduction 147
  8.2 Problem Statement 150
    8.2.1 PV Array Model 151
    8.2.2 DC-DC Boost Converter 152
  8.3 Optimal Control of PV Array 154
    8.3.1 Maximum Power Point Tracking Control 156
    8.3.2 Reference Voltage Tracking Control 162
    8.3.3 Piecewise Learning Control 164
  8.4 Application Considerations 165
    8.4.1 Partial Derivative Approximation Procedure 165
    8.4.2 Partial Shading Effect 167
  8.5 Simulation Results 170
    8.5.1 Model and Control Verification 173
    8.5.2 Comparative Results 174
    8.5.3 Model-Free Approach Results 176
    8.5.4 Piecewise Learning Results 178
    8.5.5 Partial Shading Results 179
  8.6 Summary 182
  Bibliography 182

9 An Application to Low-Level Control of Quadrotors 187
  9.1 Introduction 187
  9.2 Quadrotor Model 189
  9.3 Structured Online Learning with RLS Identifier on Quadrotor 190
    9.3.1 Learning Procedure 191
    9.3.2 Asymptotic Convergence with Uncertain Dynamics 195
    9.3.3 Computational Properties 195
  9.4 Numerical Results 197
  9.5 Summary 201
  Bibliography 201

10 Python Toolbox 205
  10.1 Overview 205
  10.2 User Inputs 205
    10.2.1 Process 206
    10.2.2 Objective 207
  10.3 SOL 207
    10.3.1 Model Update 208
    10.3.2 Database 208
    10.3.3 Library 210
    10.3.4 Control 210
  10.4 Display and Outputs 211
    10.4.1 Graphs and Printouts 213
    10.4.2 3D Simulation 213
  10.5 Summary 214
  Bibliography 214

A Appendix 215
  A.1 Supplementary Analysis of Remark 5.4 215
  A.2 Supplementary Analysis of Remark 5.5 222

Index 223
Milad Farsi received the B.S. degree in Electrical Engineering (Electronics) from the University of Tabriz in 2010 and the M.S. degree in Electrical Engineering (Control Systems) from the Sahand University of Technology in 2013. Between 2012 and 2016, he gained industrial experience as a Control System Engineer. He received the Ph.D. degree in Applied Mathematics from the University of Waterloo, Canada, in 2022, and he is currently a Postdoctoral Fellow at the same institution. His research interests include control systems, reinforcement learning, and their applications in robotics and power electronics.

Jun Liu received the Ph.D. degree in Applied Mathematics from the University of Waterloo, Canada, in 2010. He is currently an Associate Professor of Applied Mathematics and a Canada Research Chair in Hybrid Systems and Control at the University of Waterloo, where he directs the Hybrid Systems Laboratory. From 2012 to 2015, he was a Lecturer in Control and Systems Engineering at the University of Sheffield. From 2011 to 2012, he was a Postdoctoral Scholar in Control and Dynamical Systems at the California Institute of Technology. His main research interests are in the theory and applications of hybrid systems and control, including rigorous computational methods for control design with applications in cyber-physical systems and robotics.