Federated Learning: A Comprehensive Overview of Methods and Applications presents an in-depth discussion of the most important issues and approaches to federated learning for researchers and practitioners.
Federated Learning (FL) is an approach to machine learning in which the training data are not managed centrally. Data are retained by the data parties that participate in the FL process and are not shared with any other entity. This makes FL an increasingly popular solution for machine learning tasks for which bringing the data together in a centralized repository is problematic for privacy, regulatory, or practical reasons.
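The core loop behind this idea is simple to sketch: each party trains the current model on its own data, only the resulting parameters leave the party, and a coordinator averages them, in the spirit of federated averaging. The minimal sketch below illustrates this under simplifying assumptions (a linear least-squares model, two simulated parties, plain parameter averaging); the function names and values are illustrative and not taken from the book.

```python
# Minimal illustrative sketch of federated training: parties keep their raw
# data local and share only model parameters with an averaging coordinator.
# All names (local_step, fed_avg) and values here are assumptions for illustration.
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """A party trains on its own data; the raw data never leave the party."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w                                     # only parameters are shared

def fed_avg(w, parties):
    """The coordinator aggregates the parties' updates by simple averaging."""
    updates = [local_step(w, X, y) for X, y in parties]
    return np.mean(updates, axis=0)

# Two parties with private data; the coordinator only ever sees parameters.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    parties.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):                              # federated rounds
    w = fed_avg(w, parties)
print(w)                                         # approaches true_w without pooling the data
```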
This book explains recent progress in the research and development of federated learning, from the initial conception of the field to first applications and commercial use. To provide this broad and deep overview, leading researchers address federated learning from different perspectives: the core machine learning perspective, privacy and security, distributed systems, and specific application domains. Readers learn about the challenges faced in each of these areas, how they are interconnected, and how they are solved by state-of-the-art methods.
Following an overview of federated learning basics in the introduction, the reader will dive deeply into various topics over the following 24 chapters. The first part addresses algorithmic questions of solving different machine learning tasks in a federated way and how to train efficiently, at scale, and fairly. Another part focuses on how to select privacy and security solutions that can be tailored to specific use cases, while a third considers the pragmatics of the systems on which the federated learning process runs. The book also covers other important settings for federated learning, such as split learning and vertical federated learning. Finally, several chapters focus on applying FL in real-world enterprise settings.
Introduction to Federated Learning
Tree-Based Models for Federated Learning Systems
Semantic Vectorization: Text and Graph-Based Models
Personalization in Federated Learning
Personalized, Robust Federated Learning with Fed+
Communication-Efficient Distributed Optimization Algorithms
Communication-Efficient Model Fusion
Federated Learning and Fairness
Introduction to Federated Learning Systems
Local Training and Scalability of Federated Learning Systems
Straggler Management
Systems Bias in Federated Learning
Protecting Against Data Leakage in Federated Learning: What Approach Should You Choose?
Private Parameter Aggregation for Federated Learning
Data Leakage in Federated Learning
Security and Robustness in Federated Machine Learning
Dealing with Byzantine Threats to Neural Networks
Privacy-Preserving Vertical Federated Learning
Split Learning: A Resource Efficient Model & Data Parallel Approach for Distributed Deep Learning
Federated Learning for Collaborative Financial Crimes Detection
Federated Reinforcement Learning for Portfolio Management
Application of Federated Learning in Medical Imaging
Advancing Healthcare Solutions with Federated Learning
A Privacy-Preserving Product Recommender System
Application of Federated Learning in Telecommunications and Edge Computing
Heiko Ludwig is a Senior Manager, AI Platforms, and a Principal Research Staff Member at IBM’s Almaden Research Center in San Jose, CA. Heiko coordinates the Federated Learning program at IBM Research and oversees the Distributed AI research area. His research has contributed to a number of products, including IBM’s machine learning products. He is an ACM Distinguished Engineer and has more than 150 publications with more than 8000 citations. His technical work has led to a number of awards from IBM, and his numerous patents and patent applications have earned him the designation of IBM Master Inventor. Heiko is a co-editor-in-chief of the International Journal of Cooperative Information Systems and serves on the editorial boards of multiple journals. He also serves regularly as a program committee chair for conferences in the field. Heiko's wider interests lie in large-scale and cross-organizational AI systems and the related distributed systems, security, and privacy research issues. Heiko received a doctorate in information systems from Otto-Friedrich-Universität Bamberg, Germany.
Nathalie Baracaldo leads the AI Security and Privacy Solutions team and is a Research Staff Member at IBM’s Almaden Research Center in San Jose, CA. Nathalie is passionate about delivering machine learning solutions that are highly accurate, withstand adversarial attacks, and protect data privacy. She led her team in the design of the IBM Federated Learning framework, which is now part of the Watson Machine Learning product, and continues to work on its expansion. In 2020, Nathalie received the IBM Master Inventor distinction for her contributions to IBM’s intellectual property and innovation. She also received the 2021 Corporate Technical Recognition, one of the highest recognitions awarded to IBMers for breakthrough technical achievements that have led to notable market and industry success for IBM; this recognition was awarded for her contribution to the Trusted AI Initiative. Nathalie has been invited to give multiple talks on federated learning and its challenges and opportunities. She has received four best paper awards and published in top-tier conferences and journals, obtaining more than 1300 Google Scholar citations. Nathalie’s wider research interests include security and privacy, distributed systems, and machine learning. She is also an Associate Editor of IEEE Transactions on Services Computing. Nathalie received her Ph.D. from the University of Pittsburgh in 2016.