Haiqin Yang, Irwin King (The Chinese University of Hong Kong, Shatin), Michael R. Lyu
Regularization is a dominant theme in machine learning and statistics, owing to its ability to provide an intuitive and principled tool for learning from high-dimensional data. As large-scale learning applications become popular, developing efficient algorithms and parsimonious models becomes both promising and necessary for these applications. Aiming to solve large-scale learning problems, this book tackles key research problems ranging from feature selection to learning with mixed unlabeled data and learning data similarity representations. More specifically, we focus on the...