Contents

1 Affordable and Reliable Autonomous Driving Through Modular Design
  1.1 Introduction
  1.2 High Cost of Autonomous Driving Technologies
    1.2.1 Sensing
    1.2.2 HD Map Creation and Maintenance
    1.2.3 Computing Systems
  1.3 Achieving Affordability and Reliability
    1.3.1 Sensor Fusion
    1.3.2 Modular Design
    1.3.3 Extending Existing Digital Maps
  1.4 Modular Design
    1.4.1 Communication System
    1.4.2 Chassis
    1.4.3 mmWave Radar and Sonar for Passive Perception
    1.4.4 GNSS for Localization
    1.4.5 Computer Vision for Active Perception and Localization
    1.4.6 Planning and Control
    1.4.7 Mapping
  1.5 The Rest of the Book
  1.6 Open Source Projects Used in this Book
  References

2 In-Vehicle Communication Systems
  2.1 Introduction
  2.2 CAN
  2.3 FlexRay
    2.3.1 FlexRay Topology
    2.3.2 The FlexRay Communication Protocol
  2.4 CANopen
    2.4.1 Object Dictionary
    2.4.2 Profile Family
    2.4.3 Data Transmission and Network Management
    2.4.4 Communication Models
    2.4.5 CANopenNode
  References

3 Chassis Technologies for Autonomous Robots and Vehicles
  3.1 Introduction
  3.2 Throttle-by-Wire
  3.3 Brake-by-Wire
  3.4 Steer-by-Wire
  3.5 Open Source Car Control
    3.5.1 OSCC APIs
    3.5.2 Hardware
    3.5.3 Firmware
  3.6 OpenCaret
    3.6.1 OSCC Throttle
    3.6.2 OSCC Brake
    3.6.3 OSCC Steering
  3.7 PerceptIn Chassis Software Adaptation Layer
  References

4 Passive Perception with Sonar and Millimeter Wave Radar
  4.1 Introduction
  4.2 The Fundamentals of mmWave Radar
    4.2.1 Range Measurement
    4.2.2 Velocity Measurement
    4.2.3 Angle Detection
  4.3 mmWave Radar Deployment
  4.4 Sonar Deployment
  References

5 Localization with Real-Time Kinematic Global Navigation Satellite System
  5.1 Introduction
  5.2 GNSS Technology Overview
  5.3 RTK GNSS
  5.4 RTK-GNSS NtripCaster Setup Steps
    5.4.1 Set up NtripCaster
    5.4.2 Start NtripCaster
  5.5 Setting Up NtripServer and NtripClient on Raspberry Pi
    5.5.1 Install the Raspberry Pi System
    5.5.2 Run RTKLIB-str2str on the Raspberry Pi
      5.5.2.1 Running NtripServer on the Base Station Side
      5.5.2.2 Running NtripClient on the GNSS Rover
  5.6 Setting Up a Base Station and a GNSS Rover
    5.6.1 Base Station Hardware Setup
    5.6.2 Base Station Software Setup
    5.6.3 GNSS Rover Setup
      5.6.3.1 Rover Hardware Setup
      5.6.3.2 Rover Software Setup
  5.7 FreeWave Radio Basic Configuration
  References

6 Computer Vision for Perception and Localization
  6.1 Introduction
  6.2 Building Computer Vision Hardware
    6.2.1 Seven Layers of Technologies
    6.2.2 Hardware Synchronization
    6.2.3 Computing
  6.3 Calibration
    6.3.1 Intrinsic Parameters
    6.3.2 Extrinsic Parameters
    6.3.3 Kalibr
      6.3.3.1 Calibration Target
      6.3.3.2 Multiple Camera Calibration
      6.3.3.3 Camera IMU Calibration
      6.3.3.4 Multi-IMU and IMU Intrinsic Calibration
  6.4 Localization with Computer Vision
    6.4.1 VSLAM Overview
    6.4.2 ORB-SLAM2
      6.4.2.1 Prerequisites
      6.4.2.2 Building the ORB-SLAM2 Library
      6.4.2.3 Running Stereo Datasets
  6.5 Perception with Computer Vision
    6.5.1 ELAS for Stereo Depth Perception
    6.5.2 Mask R-CNN for Object Instance Segmentation
  6.6 The DragonFly Computer Vision Module
    6.6.1 DragonFly Localization Interface
    6.6.2 DragonFly Perception Interface
    6.6.3 DragonFly+
  References

7 Planning and Control
  7.1 Introduction
  7.2 Route Planning
    7.2.1 Weighted Directed Graph
    7.2.2 Dijkstra's Algorithm
    7.2.3 A* Algorithm
  7.3 Behavioral Planning
    7.3.1 Markov Decision Process
    7.3.2 Value Iteration Algorithm
    7.3.3 Partially Observable Markov Decision Process (POMDP)
    7.3.4 Solving POMDP
  7.4 Motion Planning
    7.4.1 Rapidly Exploring Random Tree
    7.4.2 RRT*
  7.5 Feedback Control
    7.5.1 Proportional-Integral-Derivative Controller
    7.5.2 Model Predictive Control
  7.6 Iterative EM Planning System in Apollo
    7.6.1 Terminologies
      7.6.1.1 Path and Trajectory
      7.6.1.2 SL Coordinate System and Reference Line
      7.6.1.3 ST Graph
    7.6.2 Iterative EM Planning Algorithm
      7.6.2.1 Traffic Decider
      7.6.2.2 QP Path and QP Speed
  7.7 PerceptIn's Planning and Control Framework
  References

8 Mapping
  8.1 Introduction
  8.2 Digital Maps
    8.2.1 Open Street Map
      8.2.1.1 OSM Data Structures
      8.2.1.2 OSM Software Stack
    8.2.2 Java OpenStreetMap Editor
      8.2.2.1 Adding a Node or a Way
      8.2.2.2 Adding Tags
      8.2.2.3 Uploading to OSM
    8.2.3 Nominatim
      8.2.3.1 Nominatim Architecture
      8.2.3.2 Place Ranking in Nominatim
  8.3 High-Definition Maps
    8.3.1 Characteristics of HD Maps
      8.3.1.1 High Precision
      8.3.1.2 Rich Geometric Information and Semantics
      8.3.1.3 Fresh Data
    8.3.2 Layers of HD Maps
      8.3.2.1 2D Orthographic Reflectivity Map
      8.3.2.2 Digital Elevation Model
      8.3.2.3 Lane/Road Model
      8.3.2.4 Stationary Map
    8.3.3 HD Map Creation
      8.3.3.1 Data Collection
      8.3.3.2 Offline Generation of HD Maps
        8.3.3.2.1 Sensor Fusion and Pose Estimation
        8.3.3.2.2 Map Data Fusion and Data Processing
        8.3.3.2.3 3D Object Location Detection
        8.3.3.2.4 Semantics/Attributes Extraction
      8.3.3.3 Quality Control and Validation
      8.3.3.4 Update and Maintenance
      8.3.3.5 Problems of HD Maps
  8.4 PerceptIn's pi-Map
    8.4.1 Topological Map
    8.4.2 pi-Map Creation
  References

9 Building the DragonFly Pod and Bus
  9.1 Introduction
  9.2 Chassis Hardware Specifications
  9.3 Sensor Configurations
  9.4 Software Architecture
  9.5 Mechanism
  9.6 Data Structures
    9.6.1 Common Data Structures
    9.6.2 Chassis Data
    9.6.3 Localization Data
    9.6.4 Perception Data
    9.6.5 Planning Data
  9.7 User Interface
  References

10 Enabling Commercial Autonomous Space Robotic Explorers
  10.1 Introduction
  10.2 Destination Mars
  10.3 Mars Explorer Autonomy
    10.3.1 Localization
    10.3.2 Perception
    10.3.3 Path Planning
    10.3.4 The Curiosity Rover and Mars 2020 Explorer
  10.4 Challenge: Onboard Computing Capability
  10.5 Conclusion
  References

11 Edge Computing for Autonomous Vehicles
  11.1 Introduction
  11.2 Benchmarks
  11.3 Computing System Architectures
  11.4 Runtime
  11.5 Middleware
  11.6 Case Studies
  References

12 Innovations on the Vehicle-to-Everything Infrastructure
  12.1 Introduction
  12.2 Evolution of V2X Technology
  12.3 Cooperative Autonomous Driving
  12.4 Challenges
  References

13 Vehicular Edge Security
  13.1 Introduction
  13.2 Sensor Security
  13.3 Operating System Security
  13.4 Control System Security
  13.5 V2X Security
  13.6 Security for Edge Computing
  References

Index
About the Author

SHAOSHAN LIU, PHD, is Founder and CEO of PerceptIn, a full-stack visual intelligence company that builds scalable, integrated hardware/software solutions for autonomous robotic systems. Liu holds a Ph.D. in Computer Engineering from the University of California, Irvine, and his research focuses on edge computing systems, robotics, and autonomous driving. He has authored over 40 publications and holds over 100 patents in autonomous systems. Liu is a Senior Member of the IEEE, an ACM Distinguished Speaker, an IEEE Computer Society Distinguished Visitor, and a co-founder of the IEEE Computer Society Special Technical Community on Autonomous Driving Technologies.