● What is Machine Learning?
● Types of Machine Learning: a. Supervised Learning: Regression, Classification (Binary or Multiclass) b. Unsupervised Learning c. Semi-Supervised Learning d. Reinforcement Learning
● Machine Learning Terms: a. Data and Datasets: Train, Test, and Validation b. Cross-Validation c. Overfitting d. Bias & Variance e. Fine-Tuning f. Performance Terms: Accuracy, Recall, Precision, F1 Score, Confusion Matrix
● Introduction to and Comparison of ML Models: a. Regression (Linear and Logistic) b. Decision Trees c. K-Nearest Neighbors d. Support Vector Machines e. K-Means Clustering f. Principal Component Analysis
● Steps of Machine Learning: Data Cleaning, Model Building, Dataset Split (Training, Testing, and Validation), and Performance Evaluation
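The workflow in the last bullet can be illustrated with a minimal sketch, assuming scikit-learn (introduced later in the book as a complementary library) and its built-in Iris dataset as a stand-in example rather than any dataset used in the book:

```python
# Illustrative only: dataset split, model building, cross-validation,
# and performance evaluation with scikit-learn on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

X, y = load_iris(return_X_y=True)

# Dataset split: hold out a test set for the final evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Model building: a multiclass logistic regression classifier
model = LogisticRegression(max_iter=1000)

# Cross-validation on the training data to estimate generalization
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())

# Fit on the training split, then evaluate on the held-out test split
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))  # precision, recall, F1
```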
Chapter 3: Deep Learning
● Introduction to Deep Learning ● Introduction to Perceptron ● Activation Functions ● Cost (Loss) Function ● Gradient Descent ● Backpropagation ● Normalization and Standardization ● Loss Function and Optimization Functions ● Optimizer
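These concepts fit together in a few lines of code. The following is a minimal sketch, assuming TensorFlow 2.x and synthetic data: a single perceptron-style neuron with a sigmoid activation, a binary cross-entropy cost function, and gradient descent driven by backpropagation.

```python
# Illustrative sketch: one neuron trained by gradient descent.
import tensorflow as tf

X = tf.constant([[0.0], [1.0], [2.0], [3.0]])   # synthetic inputs
y = tf.constant([[0.0], [0.0], [1.0], [1.0]])   # synthetic labels

w = tf.Variable([[0.1]])
b = tf.Variable([0.0])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.5)  # plain gradient descent

for step in range(200):
    with tf.GradientTape() as tape:
        logits = tf.matmul(X, w) + b                   # linear combination
        preds = tf.sigmoid(logits)                     # activation function
        loss = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(y, preds))  # cost (loss) function
    grads = tape.gradient(loss, [w, b])                # backpropagation
    optimizer.apply_gradients(zip(grads, [w, b]))      # optimizer update

print("final loss:", float(loss))
```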
Chapter 4: Relevant Technologies Used for Machine Learning
● TensorFlow vs. Other Deep Learning Libraries ● Keras API vs. Estimator ● Keras API Syntax ● Hardware Options and Performance Evaluation: CPUs vs. GPUs vs. TPUs
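The basic Keras API syntax follows a define-compile-fit pattern. The sketch below shows that pattern only; the layer sizes and input shape are arbitrary placeholders, not an example from the book.

```python
# Illustrative Keras API syntax: define, compile, and (on your data) fit.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.summary()
# model.fit(x_train, y_train, epochs=10, validation_split=0.2)  # with your own data
```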
Chapter 6: Artificial Neural Networks (ANNs)
● Introduction to ANNs ● Perceptron Model ● Linear (Shallow) Neural Networks ● Deep Neural Networks ● ANN Application Example with TF 2.0 Keras API
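A minimal sketch of the progression from a shallow (single hidden layer) network to a deep neural network with the TF 2.x Keras API might look like the following; the input size and layer widths are illustrative, not the book's exact application.

```python
# Illustrative shallow vs. deep feedforward networks in Keras.
import tensorflow as tf

shallow = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

deep = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

for m in (shallow, deep):
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    m.summary()  # compare depth and parameter counts
```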
Chapter 7: Convolutional Neural Networks (CNNs)
● Introduction to CNN Architecture ● CNN Basics: Strides and Filtering ● Dealing with Image Data ● Batch Normalization ● Data Augmentation ● CNN for Fashion MNIST with TF 2.0 Keras API ● CNN for CIFAR10 with TF 2.0 Keras API (Pre-Trained Model) ● CNN with ImageNet with TF 2.0 Keras API (Pre-Trained Model)
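A compact sketch of a Fashion MNIST CNN with batch normalization in the TF 2.x Keras API could look like this; the architecture and hyperparameters are illustrative rather than the book's exact model.

```python
# Illustrative CNN on Fashion MNIST with batch normalization.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```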
Chapter 8: Recurrent Neural Networks (RNNs)
● Introduction to RNN Architectures ● Sequence Data (incl. Time Series) ● Data Preparation ● Simple RNN Architecture
● Gated Recurrent Unit (GRU) Architecture
● Long Short-Term Memory (LSTM) Architecture
● Simple RNN, GRU, and LSTM Comparison
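One way to compare the three recurrent layers is to build otherwise identical models and inspect their parameter counts; the sketch below assumes TensorFlow 2.x and uses arbitrary sequence dimensions rather than any dataset from the book.

```python
# Illustrative comparison of SimpleRNN, GRU, and LSTM layers.
import tensorflow as tf

def build(recurrent_layer):
    """Wrap a recurrent layer in the same sequence-classification model."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(30, 8)),   # 30 time steps, 8 features per step
        recurrent_layer,
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

models = [
    ("SimpleRNN", build(tf.keras.layers.SimpleRNN(32))),
    ("GRU", build(tf.keras.layers.GRU(32))),
    ("LSTM", build(tf.keras.layers.LSTM(32))),
]

for name, m in models:
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    print(name, "parameters:", m.count_params())  # LSTM > GRU > SimpleRNN
```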
Chapter 9: Natural Language Processing (RNN and CNN applications)
● Introduction to Natural Language Processing ● Text Processing ● NLP Application with RNN ● NLP Application with CNN ● Text Generation
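A minimal sketch of an RNN-based NLP application, assuming the IMDB reviews dataset bundled with Keras as an illustrative choice, might look like this; the reviews arrive pre-tokenized as integer sequences, so text processing reduces to padding here.

```python
# Illustrative sentiment classifier: Embedding + LSTM on IMDB reviews.
import tensorflow as tf

vocab_size, max_len = 10000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 32),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.2)
model.evaluate(x_test, y_test)
```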
Chapter 10: Recommender Systems
● Introduction to Recommender Systems ● Recommender System Using MovieLens Dataset ● Recommender System Using Jester Dataset
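A common starting point for such recommenders is a collaborative-filtering model built from user and item embeddings whose dot product predicts a rating. The sketch below uses placeholder ID counts, not the MovieLens or Jester specifics.

```python
# Illustrative embedding-based recommender with the Keras functional API.
import tensorflow as tf

num_users, num_items, emb_dim = 1000, 1700, 32  # placeholder sizes

user_in = tf.keras.Input(shape=(1,), name="user_id")
item_in = tf.keras.Input(shape=(1,), name="item_id")

user_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(num_users, emb_dim)(user_in))
item_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(num_items, emb_dim)(item_in))

rating = tf.keras.layers.Dot(axes=1)([user_vec, item_vec])  # predicted rating

model = tf.keras.Model([user_in, item_in], rating)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
# model.fit([user_ids, item_ids], ratings, epochs=5)  # with your own (user, item, rating) data
```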
Chapter 11: Auto-Encoders
● Introduction to Auto-Encoders ● Dimensionality Reduction ● Noise Removal ● Auto-Encoder for Images
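A minimal sketch of a dense auto-encoder on MNIST digits, usable for dimensionality reduction through its bottleneck layer and for noise removal when trained on corrupted inputs, might look like this; layer sizes are illustrative.

```python
# Illustrative dense auto-encoder for compression and denoising.
import numpy as np
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # encoder
    tf.keras.layers.Dense(32, activation="relu"),                       # bottleneck (reduced dimension)
    tf.keras.layers.Dense(128, activation="relu"),                      # decoder
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# For denoising, corrupt the inputs but keep the clean images as targets.
noisy = np.clip(x_train + 0.3 * np.random.normal(size=x_train.shape), 0.0, 1.0)
autoencoder.fit(noisy, x_train, epochs=5, validation_split=0.1)
```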
Chapter 12: Generative Adversarial Networks (GANs)
● Introduction to Generative Adversarial Networks ● Generator and Discriminator Structures
● Image Generation with GANs
● Text Generation with GANs
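A minimal sketch of the generator and discriminator structures for an image GAN on 28x28 grayscale images, assuming TensorFlow 2.x, is shown below; the layer sizes and latent dimension are illustrative, not the book's exact models, and the adversarial training loop is omitted.

```python
# Illustrative generator and discriminator for an image GAN.
import tensorflow as tf

latent_dim = 100  # size of the random noise vector

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="tanh"),
    tf.keras.layers.Reshape((28, 28, 1)),            # noise vector -> fake image
])

discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # real vs. fake
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

generator.summary()
discriminator.summary()
```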
Chapter 13: Conclusion
Orhan Gazi Yalçın is a joint Ph.D. candidate at the University of Bologna & the Polytechnic University of Madrid. After completing his double major in business and law, he began his career in Istanbul, working for a city law firm, Allen & Overy, and a global entrepreneurship network, Endeavor. During his academic and professional career, he taught himself programming and excelled in machine learning. He currently conducts research on hotly debated law & AI topics such as explainable artificial intelligence and the right to explanation by combining his technical and legal skills. In his spare time, he enjoys free-diving, swimming, and exercising, as well as discovering new countries, cultures, and cuisines.
Implement deep learning applications using TensorFlow while learning the “why” through in-depth conceptual explanations.
You'll start by learning what deep learning offers over other machine learning models. Then you'll familiarize yourself with several technologies used to create deep learning models. Some of these technologies are complementary, such as Pandas, Scikit-Learn, and NumPy, while others are competitors, such as PyTorch, Caffe, and Theano. This book clarifies the positions of deep learning and TensorFlow among their peers.
You'll then work on supervised deep learning models to gain applied experience with the technology. A single layer of multiple perceptrons will be used to build a shallow neural network before turning it into a deep neural network. After the structure of ANNs is shown, a real-life application will be created with the TensorFlow 2.0 Keras API. Next, you'll work on data augmentation and batch normalization methods. Then, the Fashion MNIST dataset will be used to train a CNN. CIFAR10 and ImageNet pre-trained models will be loaded to create already advanced CNNs.
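As a rough illustration of the last step, a pre-trained model can be loaded through the Keras applications module as a frozen base for transfer learning; the sketch below uses MobileNetV2 with ImageNet weights as an example choice, not necessarily the model used in the book.

```python
# Illustrative transfer learning from an ImageNet pre-trained model.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(160, 160, 3))
base.trainable = False  # freeze the pre-trained convolutional base

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g., 10 CIFAR10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```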
Finally, you'll move into theoretical applications and unsupervised learning with auto-encoders and reinforcement learning with TF-Agents models. With this book, you'll delve into the practical side of applied deep learning and build a wealth of knowledge about how to use TensorFlow effectively.
You will:
Compare competing technologies and see why TensorFlow is more popular
Generate text, image, or sound with GANs
Predict the rating or preference a user will give to an item