Chapter 1: "A Deep Dive into Keras"
Chapter Goal: To give a structured yet deep overview of Keras and to lay the groundwork for implementations in future chapters.
Number of Pages: ~30
Subtopics
1. Why Keras? Versatility and simplicity.
2. Steps needed to create a Keras model: define architecture, compile, fit.
a. Compile: discuss TensorFlow optimizers, losses, and metrics.
b. Fit: discuss callbacks.
3. Sequential model + example.
4. Functional model + example.
5. Visualizing Keras models.
6. Data: using NumPy arrays, the Keras ImageDataGenerator, and TensorFlow datasets.
7. Hardware: using and accessing CPU, GPU, and TPU.
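The define/compile/fit workflow in subtopic 2 can be sketched end to end. The layer sizes, synthetic data, and callback settings below are illustrative assumptions, not recommendations.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Step 1: define the architecture (a small fully connected classifier).
model = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Step 2: compile with an optimizer, a loss, and metrics.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Step 3: fit on (synthetic) data, attaching a callback.
x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
stop = keras.callbacks.EarlyStopping(monitor="loss", patience=2)
history = model.fit(x, y, epochs=3, batch_size=16, callbacks=[stop], verbose=0)

# The same model via the functional API, which also supports
# non-linear topologies (multiple inputs/outputs, branches, skips).
inputs = keras.Input(shape=(8,))
h = layers.Dense(16, activation="relu")(inputs)
outputs = layers.Dense(1, activation="sigmoid")(h)
functional_model = keras.Model(inputs, outputs)
```

The same three steps apply unchanged to the functional version; only the way the architecture is declared differs.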
Chapter 2: "Pre-training Strategies and Transfer Learning"
Chapter Goal: To understand the importance of transfer learning and to use a variety of transfer learning methods to solve deep learning problems efficiently.
Number of Pages: ~30
Subtopics
1. Transfer learning theory, practical tips and tricks.
2. Accessing and using Keras and TensorFlow pretrained models.
a. Bonus: converting PyTorch models (PyTorch has a wider variety) into Keras models for greater access to pretrained networks.
3. Manipulating pretrained models with other network elements.
4. Layer freezing.
5. Self-supervised learning methods.
Chapter 3: “The Versatility of Autoencoders”
Chapter Goal: To understand the versatility of autoencoders and to be able to use them in a wide variety of problem scenarios.
Number of Pages: ~30
Subtopics
1. Autoencoder theory.
2. One-dimensional data autoencoder implementation, tips and tricks.
3. Convolutional autoencoder implementation, tips and tricks, special concerns.
4. Using autoencoders for pretraining.
a. Example case study: TabNet.
5. Using autoencoders for feature reduction.
6. Variational autoencoders for data generation.
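A minimal dense autoencoder illustrates subtopics 2 and 5: train the network to reconstruct its own input, then reuse the encoder alone for feature reduction. The dimensions and synthetic data below are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Encoder: compress 8-dimensional inputs into a 2-dimensional bottleneck.
inputs = keras.Input(shape=(8,))
code = layers.Dense(2, activation="relu", name="bottleneck")(inputs)
# Decoder: reconstruct the original 8 dimensions from the bottleneck.
outputs = layers.Dense(8, activation="linear")(code)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 8).astype("float32")
autoencoder.fit(x, x, epochs=3, batch_size=16, verbose=0)  # target = input

# Reuse the trained encoder alone as a learned feature reducer.
encoder = keras.Model(inputs, code)
features = encoder.predict(x, verbose=0)
print(features.shape)  # (64, 2)
```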
Chapter 4: “Model Compression for Practical Deployment”
Chapter Goal: To understand pruning theory, to implement pruning for effective model compression, and to recognize the important role of pruning in modern deep learning research.
Number of Pages: ~20
Subtopics
1. Pruning theory.
2. Pruning Keras models with TensorFlow.
3. Exciting implications of pruning: the Lottery Ticket Hypothesis.
a. Example case study: no-training neural networks.
b. Example case study: extreme learning machines.
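In Keras, pruning is typically applied with the TensorFlow Model Optimization Toolkit; the framework-free sketch below only illustrates the core idea behind magnitude-based pruning: zero out the smallest-magnitude fraction of a weight matrix.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold     # keep only larger-magnitude weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))
pruned = prune_by_magnitude(w, sparsity=0.8)
print(float(np.mean(pruned == 0)))  # close to 0.8
```

Real pruning schedules do this gradually during training rather than in one shot, so the surviving weights can adapt to the removals.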
Chapter 5: “Automating Model Design with Meta-Optimization”
Chapter Goal: To understand what meta-optimization is and to be able to use it to effectively automate the design of neural networks.
Number of Pages: ~20
Subtopics
1. Meta-optimization theory.
2. Demonstration of meta-optimization using HyperOpt with Keras.
3. Demonstration of AutoML and Neural Architecture Search.
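HyperOpt expresses searches through `fmin`, a search space built from `hp` distributions, and algorithms such as `tpe.suggest`. As a dependency-free sketch of the same idea, here is a minimal random search over a hypothetical objective standing in for validation loss.

```python
import random

# Hypothetical stand-in for "train a model, return validation loss".
def objective(params):
    # Pretend the best settings are lr=0.01 and 2 hidden layers.
    return abs(params["lr"] - 0.01) + abs(params["n_layers"] - 2)

def random_search(objective, n_trials, seed=0):
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),  # log-uniform learning rate
            "n_layers": rng.randint(1, 4),    # discrete depth choice
        }
        loss = objective(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best, loss = random_search(objective, n_trials=200)
```

Methods like TPE improve on this by modeling which regions of the search space have produced good trials, rather than sampling blindly.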
Chapter 6: "Successful Neural Network Architecture Design"
Chapter Goal: To gain an understanding of principles in successful neural network architecture design through three case studies.
Number of Pages: ~25
Subtopics
1. Diversity of neural network designs and the need to design specific architectures for particular problems.
2. Theory and implementation of block/cell/module design and considerations.
a. Example case study: Inception model.
3. Theory and implementation of "normal" and "extreme" usages of skip connections.
a. Parallel towers and cardinality.
b. Example case study: U-Net model.
4. Neural network scaling.
a. Example case study: EfficientNet.
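A skip connection in its "normal" usage can be sketched with the functional API; the residual block below is a minimal illustration with arbitrary layer sizes.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, units):
    """A minimal skip connection: transform x, then add the input back."""
    h = layers.Dense(units, activation="relu")(x)
    h = layers.Dense(units)(h)
    return layers.Add()([x, h])  # requires x to already have `units` features

inputs = keras.Input(shape=(16,))
x = layers.Dense(16, activation="relu")(inputs)
x = residual_block(x, 16)   # blocks like this are stacked to build depth
x = residual_block(x, 16)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

y = model.predict(np.zeros((4, 16), dtype="float32"), verbose=0)
print(y.shape)  # (4, 1)
```

"Extreme" usages extend the same primitive: many long-range skips across an encoder-decoder, as in U-Net-style architectures.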
Chapter 7: “Reframing Difficult Deep Learning Problems”
Chapter Goal: To explore how hard problems can be reframed to be solved by deep learning with three case studies.
Number of Pages: ~30
Subtopics
1. The diversity of problems deep learning is being used to solve.
2. Example case study: Siamese networks – experimenting with architecture.
3. Example case study: DeepInsight – experimenting with data representation.
4. Example case study: Semi-supervised generative adversarial networks – experimenting with data availability.
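The core of a Siamese network is a single encoder, with shared weights, applied to both members of a pair; the one-layer NumPy encoder and contrastive loss below are illustrative simplifications of that idea.

```python
import numpy as np

def embed(x, w):
    """Shared embedding: both inputs of a pair pass through the SAME weights."""
    return np.maximum(x @ w, 0.0)  # a one-layer ReLU encoder

def contrastive_loss(e1, e2, same, margin=1.0):
    """Pull matching pairs together; push mismatched pairs at least `margin` apart."""
    d = np.linalg.norm(e1 - e2, axis=1)
    return np.where(same, d ** 2, np.maximum(margin - d, 0.0) ** 2)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))                      # the shared weights
a, b = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
labels = np.array([1, 1, 0, 0, 1], dtype=bool)   # which pairs match
losses = contrastive_loss(embed(a, w), embed(b, w), labels)
```

In a trained Keras Siamese network, the same weight sharing is achieved by calling one layer (or sub-model) on both inputs.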
Andre Ye is a data science writer and editor; he has written over 300 data science articles, read more than ten million times, for top data science publications. He is also a cofounder of Critiq, a peer revision platform that uses machine learning to match users’ essays. In his spare time, Andre enjoys keeping up with current deep learning research, playing the piano, and swimming.
Learn how to harness modern deep learning methods in many contexts. Packed with intuitive theory, practical implementation methods, and deep learning case studies, this book gives you the tools you need to design and implement solutions like a deep learning architect. It covers tools deep learning engineers can use in a wide range of fields, from biology to computer vision to business. With nine in-depth case studies, this book will ground you in creative, real-world deep learning thinking.
You’ll begin with a structured guide to using Keras, with helpful tips and best practices for making the most of the framework. Next, you’ll learn how to train models effectively with transfer learning and self-supervised pre-training. You will then learn how to apply a variety of model compression techniques for practical deployment. Lastly, you will learn how to design successful neural network architectures and creatively reframe difficult problems into solvable ones. You’ll learn not only to understand and apply these methods successfully but to think critically about them.
Modern Deep Learning Design and Methods is ideal for readers looking to utilize modern, flexible, and creative deep learning design and methods. Get ready to design and implement innovative deep learning solutions to today’s difficult problems.
You will:
Improve the performance of deep learning models by using pre-trained models, extracting rich features, and automating optimization.
Compress deep learning models while maintaining performance.
Reframe a wide variety of difficult problems and design effective deep learning solutions to solve them.
Use the Keras framework, with some help from libraries like HyperOpt, TensorFlow, and PyTorch, to implement a wide variety of deep learning approaches.