Author Biographies xi
List of Figures xiii
List of Tables xvii
Preface xix

Part I Human-Robot Interaction Control 1

1 Introduction 3
1.1 Human-Robot Interaction Control 3
1.2 Reinforcement Learning for Control 6
1.3 Structure of the Book 7
References 10

2 Environment Model of Human-Robot Interaction 17
2.1 Impedance and Admittance 17
2.2 Impedance Model for Human-Robot Interaction 21
2.3 Identification of Human-Robot Interaction Model 24
2.4 Conclusions 30
References 30

3 Model Based Human-Robot Interaction Control 33
3.1 Task Space Impedance/Admittance Control 33
3.2 Joint Space Impedance Control 36
3.3 Accuracy and Robustness 37
3.4 Simulations 39
3.5 Conclusions 42
References 44

4 Model Free Human-Robot Interaction Control 45
4.1 Task-Space Control Using Joint-Space Dynamics 45
4.2 Task-Space Control Using Task-Space Dynamics 52
4.3 Joint Space Control 53
4.4 Simulations 54
4.5 Experiments 55
4.6 Conclusions 68
References 71

5 Human-in-the-Loop Control Using Euler Angles 73
5.1 Introduction 73
5.2 Joint-Space Control 74
5.3 Task-Space Control 79
5.4 Experiments 83
5.5 Conclusions 92
References 94

Part II Reinforcement Learning for Robot Interaction Control 97

6 Reinforcement Learning for Robot Position/Force Control 99
6.1 Introduction 99
6.2 Position/Force Control Using an Impedance Model 100
6.3 Reinforcement Learning Based Position/Force Control 103
6.4 Simulations and Experiments 110
6.5 Conclusions 117
References 117

7 Continuous-Time Reinforcement Learning for Force Control 119
7.1 Introduction 119
7.2 K-means Clustering for Reinforcement Learning 120
7.3 Position/Force Control Using Reinforcement Learning 124
7.4 Experiments 130
7.5 Conclusions 136
References 136

8 Robot Control in Worst-Case Uncertainty Using Reinforcement Learning 139
8.1 Introduction 139
8.2 Robust Control Using Discrete-Time Reinforcement Learning 141
8.3 Double Q-Learning with k-Nearest Neighbors 144
8.4 Robust Control Using Continuous-Time Reinforcement Learning 150
8.5 Simulations and Experiments: Discrete-Time Case 154
8.6 Simulations and Experiments: Continuous-Time Case 161
8.7 Conclusions 170
References 170

9 Redundant Robots Control Using Multi-Agent Reinforcement Learning 173
9.1 Introduction 173
9.2 Redundant Robot Control 175
9.3 Multi-Agent Reinforcement Learning for Redundant Robot Control 179
9.4 Simulations and Experiments 183
9.5 Conclusions 187
References 189

10 Robot H2 Neural Control Using Reinforcement Learning 193
10.1 Introduction 193
10.2 H2 Neural Control Using Discrete-Time Reinforcement Learning 194
10.3 H2 Neural Control in Continuous Time 207
10.4 Examples 219
10.5 Conclusion 229
References 229

11 Conclusions 233

A Robot Kinematics and Dynamics 235
A.1 Kinematics 235
A.2 Dynamics 237
A.3 Examples 240
References 246

B Reinforcement Learning for Control 247
B.1 Markov Decision Processes 247
B.2 Value Functions 248
B.3 Iterations 250
B.4 TD Learning 251
Reference 258

Index 259
WEN YU, PhD, is Professor and Head of the Departamento de Control Automático at the Centro de Investigación y de Estudios Avanzados, Instituto Politécnico Nacional (CINVESTAV-IPN), Mexico City, Mexico. He is a co-author of Modeling and Control of Uncertain Nonlinear Systems with Fuzzy Equations and Z-Number.

ADOLFO PERRUSQUÍA, PhD, is a Research Fellow in the School of Aerospace, Transport, and Manufacturing at Cranfield University in Bedford, UK.