

Machine Learning: A Practical Approach on the Statistical Learning Theory

ISBN-13: 9783030069490 / English / Paperback / 2019 / 362 pp.

Rodrigo F. Mello; Moacir Antonelli Ponti
Price: 261.63 zł
(net: 249.17, VAT: 5%)

Lowest price in the last 30 days: 250.57 zł
Order fulfillment time:
approx. 16-18 business days.

Free shipping!
Categories:
Computer Science, Databases
BISAC categories:
Computers > Artificial Intelligence - General
Computers > Mathematical & Statistical Software
Mathematics > Applied Mathematics
Publisher:
Springer
Language:
English
ISBN-13:
9783030069490
Publication year:
2019
Number of pages:
362
Weight:
0.53 kg
Dimensions (cm):
23.39 x 15.6 x 1.98
Binding:
Paperback
Additional information:
Illustrated edition

"The book addresses the subject of machine learning, with an emphasis on statistical learning theory. ... The book can be used in ML courses as well as for independent study, since it presents very thoroughly all the fundamental theoretical insights of SLT, together with examples and implementations in R." (Catalin Stoean, zbMATH 1408.68003, 2019)

Chapter 1 – A Brief Review on Machine Learning

                1.1 Machine Learning definition

                1.2 Main types of learning

                1.3 Supervised learning

                1.4 How a supervised algorithm learns?

                1.5 Illustrating the Supervised Learning

                                1.5.1 The Perceptron

                                1.5.2 Multilayer Perceptron

                1.6 Concluding Remarks  

Chapter 2 - Statistical Learning Theory

                2.1 Motivation

                2.2 Basic concepts

                                2.2.1 Probability densities and joint probabilities

                                2.2.2 Identically and independently distributed data

                                2.2.3 Assumptions considered by the Statistical Learning Theory

                                2.2.4 Expected risk and generalization

                                2.2.5 Bounds for generalization with a practical example

                                2.2.6 Bayes risk and universal consistency

                                2.2.7 Consistency, overfitting and underfitting

                                2.2.8 Bias of classification algorithms

                2.3 Empirical Risk Minimization Principle

                                2.3.1 Consistency and the ERM Principle

                                2.3.2 Restriction of the space of admissible functions

                                2.3.3 Ensuring uniform convergence in practice

                2.4 Symmetrization lemma and the shattering coefficient

                                2.4.1 Shattering coefficient as a capacity measure

                                2.4.2 Making the ERM Principle consistent for infinite functions

                2.5 Generalization bounds

                2.6 The Vapnik-Chervonenkis dimension

                                2.6.1 Margin bounds

                2.7 Concluding Remarks

Chapter 3 - Assessing Learning Algorithms

                3.1 Mapping the concepts of the Statistical Learning Theory

                3.2 Using the Chernoff bound

                3.3 Using the Generalization Bound

                3.4 Using the SVM Generalization Bound

                3.5 Empirical Study of the Biases of Classification Algorithms

                3.6 Concluding Remarks

Chapter 4 - Introduction to Support Vector Machines

                4.1 Using basic Algebra to build a classification algorithm

                4.2 Hyperplane-based classification: an intuitive view

                4.3 Hyperplane-based classification: an algebraic view

                                4.3.1 Lagrange multipliers

                                4.3.2 Karush-Kuhn-Tucker conditions

                4.4 Formulating the hard-margin SVM optimization problem

                4.5 Formulating the soft-margin SVM optimization problem

                4.6 Concluding Remarks

Chapter 5 - In Search for the Optimization Algorithm

                5.1 What is an optimization problem?

                5.2 Main types of optimization problems

                5.3 Linear optimization problems

                                5.3.1 Solving through graphing

                                5.3.2 Primal and dual forms of linear problems

                                                5.3.2.1 Using the table and rules

                                                5.3.2.2 Graphical interpretation of primal and dual forms

                                                5.3.2.3 Using Lagrange multipliers

                                5.3.3 Using an algorithmic approach to solve linear problems

                                5.3.4 On the KKT conditions for linear problems

                                                5.3.4.1 Applying the rules

                                                5.3.4.2 Graphical interpretation of the KKT conditions

                5.4 Convex optimization problems

                                5.4.1 Interior Point Methods

                                5.4.2 The Primal-Dual Path Following Interior Point Method

                                5.4.3 Implementing the Interior Point Method to solve our first optimization problem

                                5.4.4 Implementing the Interior Point Method to solve the SVM optimization problem

                                5.4.5 Solving the SVM optimization problem using package LowRankQP

                5.5 Concluding Remarks

Chapter 6 - A Brief Introduction on Kernels

                6.1 Definitions, typical kernels and examples

                                6.1.1 The Polynomial kernel

                                6.1.2 The Radial Basis Function kernel

                                6.1.3 The Sigmoidal Kernel

                                6.1.4 Practical examples with kernels

                6.2 Linear Algebra

                                6.2.1 Basis

                                6.2.2 Linear transformation

                                6.2.3 Inverses of Linear Transformations

                                6.2.4 Dot products

                                6.2.5 Change of basis and orthonormal basis

                                6.2.6 Eigenvalues and Eigenvectors

                6.3 Principal Component Analysis (PCA)

                6.4 Kernel Principal Component Analysis (KPCA)

                6.5 Assessing Kernels using Practical Problems

                6.6 SVM Kernel trick

                6.7 A quick note on Mercer's theorem

                6.8 Conclusions

 

Rodrigo Fernandes de Mello is Associate Professor in the Department of Computer Science at the Institute of Mathematics and Computer Sciences, University of São Paulo, São Carlos, SP, Brazil. He obtained his PhD degree from the University of São Paulo. His research interests include Statistical Learning Theory, Machine Learning, Data Streams, and applications of Dynamical Systems concepts. He has published more than 100 papers in journals and conferences, has supported and organized international conferences, and has served as an editor of international journals.

Moacir Antonelli Ponti is Associate Professor in the Department of Computer Science at the Institute of Mathematics and Computer Sciences, University of São Paulo, São Carlos, Brazil, and was a visiting researcher at the Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey. He obtained his PhD from the Federal University of São Carlos. His research interests include Pattern Recognition and Computer Vision, as well as Signal, Image and Video Processing.

This book presents the Statistical Learning Theory in a detailed and easy-to-understand way, using practical examples, algorithms and source code. It can be used as a textbook in graduate or undergraduate courses, for self-learners, or as a reference on the main theoretical concepts of Machine Learning. Fundamental concepts of Linear Algebra and Optimization applied to Machine Learning are provided, as well as source code in R, making the book as self-contained as possible.

It starts with an introduction to Machine Learning concepts and algorithms such as the Perceptron, Multilayer Perceptron and the Distance-Weighted Nearest Neighbors, with examples, in order to provide the foundation the reader needs to understand the Bias-Variance Dilemma, the central point of the Statistical Learning Theory.
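The Perceptron named above is the simplest of these algorithms, and its learning rule fits in a few lines. The sketch below is illustrative only (the book's own examples are in R, and the toy dataset, learning rate, and epoch count here are this sketch's assumptions, not the book's):

```python
# A minimal sketch of the classic Perceptron learning rule.
# Dataset, learning rate, and epoch count are illustrative choices.

def perceptron_train(samples, labels, lr=0.1, epochs=20):
    """Fit weights w and bias b so that sign(w.x + b) matches each label."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation >= 0 else -1
            if prediction != y:  # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable toy problem: logical AND with labels in {-1, +1}.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = perceptron_train(X, y)
print([perceptron_predict(w, b, x) for x in X])  # -> [-1, -1, -1, 1]
```

For linearly separable data such as this, the Perceptron convergence theorem guarantees the mistake-driven updates eventually stop; the Multilayer Perceptron extends the same idea to non-separable problems.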

Afterwards, it introduces all the assumptions and formalizes the Statistical Learning Theory, allowing the practical study of different classification algorithms. It then proceeds through concentration inequalities until arriving at the Generalization and Large-Margin bounds, providing the main motivations for Support Vector Machines.
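The simplest concentration result of the kind described here is Hoeffding's inequality: for a single fixed hypothesis with 0-1 loss, with probability at least 1-δ the expected risk exceeds the empirical risk by at most sqrt(ln(2/δ)/(2n)). The sketch below illustrates that standard two-sided bound (not necessarily the exact variant the book derives), showing how the guarantee tightens as the sample size n grows:

```python
import math

def hoeffding_gap(n, delta=0.05):
    """Two-sided Hoeffding bound on |R(f) - R_emp(f)| for a single fixed
    hypothesis with 0-1 loss, holding with probability at least 1 - delta."""
    return math.sqrt(math.log(2 / delta) / (2 * n))

# The guarantee tightens at rate O(1/sqrt(n)) as the sample grows.
for n in (100, 1000, 10000):
    print(n, round(hoeffding_gap(n), 4))
```

Chapter 2's shattering coefficient and VC dimension generalize this single-hypothesis bound to whole classes of admissible functions.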

From that, it introduces all the optimization concepts necessary to implement Support Vector Machines. As a next stage of development, the book finishes with a discussion of SVM kernels as a way, and a motivation, to study data spaces and improve classification results.
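The kernel trick covered in the final chapter replaces explicit feature maps with kernel evaluations: an SVM solver only ever needs the Gram matrix of pairwise kernel values. As a small illustration (toy data and the γ value are this sketch's assumptions), the Radial Basis Function kernel from Section 6.1.2 can be computed directly:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """RBF kernel k(x, z) = exp(-gamma * ||x - z||^2): an inner product
    in an implicit, infinite-dimensional feature space."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

# Gram matrix for a toy dataset; SVM solvers only ever consume these
# entries, never the feature map itself -- that is the "kernel trick".
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
K = [[rbf_kernel(a, b) for b in X] for a in X]
for row in K:
    print([round(v, 3) for v in row])
```

Note the matrix is symmetric with ones on the diagonal, and entries shrink as points move apart; Mercer's theorem is what guarantees such a matrix is positive semi-definite and hence a valid inner product.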


