In this chapter, we present background on robots and discuss robotic manipulation learning algorithms. Imitation learning methods are then described in detail. We review current studies on wearable demonstration, in which the demonstrations for imitation learning are collected with wearable devices, and we also discuss existing challenges. The organization of this chapter is as follows.
1.1 Background
1.2 State-of-the-art of robotic manipulation learning
1.3 State-of-the-art of imitation learning
1.4 State-of-the-art of wearable demonstration
1.5 Existing challenges
1.6 Summary
Chapter 2: Wearable inertial device
In this chapter, we present the background of wearable inertial devices and describe the inertial sensors in detail. Calibration methods for both the sensors and the wearable device are developed, and wearable computing algorithms are presented. Experimental results demonstrate the performance of the wearable inertial device. The organization of this chapter is as follows.
2.1 Background
2.2 Inertial sensors
2.3 Sensor calibration
2.4 Wearable calibration
2.5 Wearable computing
2.6 Experimental results
2.7 Summary
Chapter 3: Robotic manipulation learning from indirect demonstration
In this chapter, the background of demonstrations is presented and indirect demonstrations are introduced. Then the demonstration datasets and the experiments on indirect manipulation demonstration using the proposed wearable device are described. We propose a robotic manipulation learning method that integrates the crucial experience contained in the demonstrations. Finally, we verify the developed methods through both simulations and experiments on grasping objects of various shapes. The organization of this chapter is as follows.
3.1 Background
3.2 Indirect demonstration
3.3 Learning method
3.4 Experimental results
3.5 Summary
Chapter 4: Robotic manipulation learning from direct demonstration
In this chapter, we provide an overview of direct demonstration. We exploit the intrinsic relation between human and robot, and develop a novel mapping method in which the operator's fingers are used for robotic hand teleoperation and the arm with palm is used for robotic arm teleoperation. A rotation-invariant dynamical movement primitive method is then presented for learning the operation skills. Finally, the effectiveness of the proposed human experience learning system is evaluated through experiments. The organization of this chapter is as follows.
4.1 Background
4.2 Direct demonstration
4.3 Learning policy
4.4 Experimental results
4.5 Summary
Chapter 5: Vision-based learning for robotic manipulation
In this chapter, we provide an overview of vision-based robotic manipulation. An end-to-end learning method is then presented for learning the operation skills. Finally, the effectiveness of the proposed learning system is evaluated through experiments. The organization of this chapter is as follows.
5.1 Background
5.2 Vision-based learning method
5.3 Experimental results
5.4 Summary
Chapter 6: Conclusions
6.1 Summary
6.2 Future work
Bin Fang is an Assistant Researcher at the Department of Computer Science and Technology, Tsinghua University. His main research interests include wearable devices and human-robot interaction. He has been a lead guest editor for a number of journals, including Frontiers in Neurorobotics and Frontiers in Robotics and AI, and has served as an associate editor for various journals and conferences, e.g. the International Journal of Advanced Robotic Systems and the IEEE International Conference on Advanced Robotics and Mechatronics.
Fuchun Sun is a Full Professor at the Department of Computer Science and Technology, Tsinghua University. A recipient of the National Science Fund for Distinguished Young Scholars, his main research interests include intelligent control and robotics. He serves as an associate editor for a number of international journals, including IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE Transactions on Fuzzy Systems, Mechatronics, and Robotics and Autonomous Systems.
Huaping Liu is an Associate Professor at the Department of Computer Science and Technology, Tsinghua University. His main research interests include robotic perception and learning. He serves as an associate editor for various journals, including IEEE Transactions on Automation Science and Engineering, IEEE Transactions on Industrial Informatics, IEEE Robotics & Automation Letters, Neurocomputing, and Cognitive Computation.
Chunfang Liu is an Assistant Professor at the Department of Artificial Intelligence and Automation, Beijing University of Technology. Her research interests include intelligent robotics and vision.
Di Guo received her Ph.D. degree from the Department of Computer Science and Technology, Tsinghua University, Beijing, in 2017. Her research interests include robotic manipulation and sensor fusion.
Over the next few decades, millions of people, with varying backgrounds and levels of technical expertise, will have to effectively interact with robotic technologies on a daily basis. This means it will have to be possible to modify robot behavior without explicitly writing code, but instead via a small number of wearable devices or visual demonstrations. At the same time, robots will need to infer and predict humans’ intentions and internal objectives on the basis of past interactions in order to provide assistance before it is explicitly requested; this is the basis of imitation learning for robotics.
This book introduces readers to robotic imitation learning based on human demonstration with wearable devices. It presents an advanced calibration method for wearable sensors and fusion approaches under the Kalman filter framework, as well as a novel wearable device for capturing gestures and other motions. Furthermore, it describes wearable-device-based and vision-based imitation learning methods for robotic manipulation, making it a valuable reference guide for graduate students with a basic knowledge of machine learning, and for researchers interested in wearable computing and robotic learning.