Machine learning (ML) has become a commonplace element in our everyday lives and a standard tool for many fields of science and engineering. To make optimal use of ML, it is essential to understand its underlying principles.
This book approaches ML as the computational implementation of a basic scientific principle: continuously adapting a model of a given data-generating phenomenon by minimizing some form of loss incurred by the model's predictions.
The book trains readers to break down various ML applications and methods in terms of data, model, and loss, thus helping them to choose from the vast range of ready-made ML methods.
The book’s three-component approach to ML provides uniform coverage of a wide range of concepts and techniques. As a case in point, techniques for regularization, privacy preservation, and explainability amount to specific design choices for the model, data, and loss of an ML method.
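The data/model/loss decomposition described above can be illustrated with a minimal sketch (not taken from the book; the data values, model, and step size here are hypothetical) in which a linear model is adapted to data by gradient steps that reduce the average squared-error loss:

```python
import numpy as np

# data: feature/label pairs from a hypothetical data-generating phenomenon
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

# model: linear hypothesis h(x) = w * x, parametrized by a single weight w
w = 0.0

# loss: average squared error incurred by the model's predictions
def empirical_risk(w, X, y):
    return np.mean((w * X - y) ** 2)

# learning: continuously adapt the model by gradient steps on the loss
for _ in range(200):
    grad = np.mean(2 * (w * X - y) * X)  # derivative of the risk w.r.t. w
    w -= 0.01 * grad

print(round(w, 2))  # w settles near the least-squares slope (about 2)
```

Swapping out any one of the three components (e.g., a different loss, or a regularized model) yields a different ML method while the overall recipe stays the same.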
Introduction.- Components of ML.- The Landscape of ML.- Empirical Risk Minimization.- Gradient-Based Learning.- Model Validation and Selection.- Regularization.- Clustering.- Feature Learning.- Transparent and Explainable ML.
Alexander Jung is Assistant Professor of Machine Learning at the Department of Computer Science, Aalto University, where he leads the research group "Machine Learning for Big Data". His courses on machine learning, artificial intelligence, and convex optimization are among the most popular courses offered at Aalto University. He received a Best Student Paper Award at the premier signal processing conference IEEE ICASSP in 2011, an Amazon Web Services Machine Learning Award in 2018, and was elected Teacher of the Year by the Department of Computer Science in 2018. He serves as an Associate Editor for the IEEE Signal Processing Letters.