'… an excellent discussion of representative algorithms as used in data science today - one of the best in-depth resources to appear in recent years for a scientist working on new analytic approaches or optimization … Highly recommended.' J. Brzezinski, Choice
Part I. Machine Learning: 1. Rudiments of Statistical Learning; 2. Vapnik–Chervonenkis Dimension; 3. Learnability for Binary Classification; 4. Support Vector Machines; 5. Reproducing Kernel Hilbert Spaces; 6. Regression and Regularization; 7. Clustering; 8. Dimension Reduction;
Part II. Optimal Recovery: 9. Foundational Results of Optimal Recovery; 10. Approximability Models; 11. Ideal Selection of Observation Schemes; 12. Curse of Dimensionality; 13. Quasi-Monte Carlo Integration;
Part III. Compressive Sensing: 14. Sparse Recovery from Linear Observations; 15. The Complexity of Sparse Recovery; 16. Low-Rank Recovery from Linear Observations; 17. Sparse Recovery from One-Bit Observations; 18. Group Testing;
Part IV. Optimization: 19. Basic Convex Optimization; 20. Snippets of Linear Programming; 21. Duality Theory and Practice; 22. Semidefinite Programming in Action; 23. Instances of Nonconvex Optimization;
Part V. Neural Networks: 24. First Encounter with ReLU Networks; 25. Expressiveness of Shallow Networks; 26. Various Advantages of Depth; 27. Tidbits on Neural Network Training;
Appendix A. High-Dimensional Geometry; Appendix B. Probability Theory; Appendix C. Functional Analysis; Appendix D. Matrix Analysis; Appendix E. Approximation Theory.