Timothy Masters received a PhD in mathematical statistics with a specialization in numerical computing. Since then he has worked continuously as an independent consultant for government and industry. His early research involved automated feature detection in high-altitude photographs; he developed applications for flood and drought prediction, detection of hidden missile silos, and identification of threatening military vehicles. Later he worked with medical researchers to develop computer algorithms for distinguishing between benign and malignant cells in needle biopsies. For the last twenty years he has focused primarily on methods for evaluating automated financial market trading systems. He has authored six books on practical applications of predictive modeling:
• Practical Neural Network Recipes in C++ (Academic Press, 1993)
• Signal and Image Processing with Neural Networks (Wiley, 1994)
• Advanced Algorithms for Neural Networks (Wiley, 1995)
• Neural, Novel, and Hybrid Algorithms for Time Series Prediction (Wiley, 1995)
• Assessing and Improving Prediction and Classification (CreateSpace, 2013)
• Deep Belief Nets in C++ and CUDA C: Volume I: Restricted Boltzmann Machines and Supervised Feedforward Networks (CreateSpace, 2015)
Discover the essential building blocks of a common and powerful form of deep belief net: the autoencoder. You’ll take this topic beyond current usage by extending it to the complex domain for signal and image processing applications. Deep Belief Nets in C++ and CUDA C: Volume 2 also covers several algorithms for preprocessing time series and image data. These algorithms focus on the creation of complex-domain predictors that are suitable for input to a complex-domain autoencoder. Finally, you’ll learn a method for embedding class information in the input layer of a restricted Boltzmann machine. This facilitates generative display of samples from individual classes rather than the entire data distribution. The ability to see the features that the model has learned for each class separately can be invaluable.
At each step this book provides you with intuitive motivation, a summary of the most important equations relevant to the topic, and highly commented code for threaded computation on modern CPUs as well as massively parallel processing on computers with CUDA-capable video display cards.
You will:
• Code for deep learning, neural networks, and AI using C++ and CUDA C
• Carry out signal preprocessing using simple transformations, Fourier transforms, Morlet wavelets, and more
• Use the Fourier transform for image preprocessing
• Implement autoencoding via activation in the complex domain
• Work with algorithms for CUDA gradient computation