Chapter Goal: Introduce readers to the PySpark environment, walk them through the steps to set up the environment, and execute some basic operations (a short PySpark sketch follows the subtopics)
Number of pages: 20
Subtopics:
1. Setting up your environment & data
2. Basic operations
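The operations in this chapter can be previewed with a minimal sketch, assuming a working local PySpark installation; the application name and file path below are placeholders:

    from pyspark.sql import SparkSession

    # Start (or reuse) a local Spark session
    spark = SparkSession.builder.appName("setup-check").getOrCreate()

    # Load a CSV file into a DataFrame and take a first look (path is illustrative)
    df = spark.read.csv("data/sample.csv", header=True, inferSchema=True)
    df.printSchema()
    df.show(5)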
Chapter 2: Basic Statistics and Visualizations
Chapter Goal: Introduce readers to the predictive model-building framework and help them get comfortable with basic data operations (a short PySpark sketch follows the subtopics)
Number of pages: 30
Subtopics:
1. Basic Statistics
2. Data manipulation/feature engineering
3. Data visualizations
4. Model building framework
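A minimal sketch of the descriptive statistics and simple feature engineering covered here; the column names and values are illustrative:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("basic-stats").getOrCreate()
    df = spark.createDataFrame(
        [(1, 120.0), (2, 35.5), (3, 210.0), (4, 80.0)],
        ["customer_id", "purchase_amount"],
    )

    # Summary statistics: count, mean, stddev, min, max
    df.describe("purchase_amount").show()

    # Simple feature engineering: flag high-value purchases and profile the flag
    df = df.withColumn("high_value", (F.col("purchase_amount") > 100).cast("int"))
    df.groupBy("high_value").count().show()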
Chapter 3: Variable Selection
Chapter Goal: Illustrate the different variable selection techniques for identifying the top variables in a dataset and show how they can be implemented using PySpark pipelines (a short PySpark sketch follows the subtopics)
Number of pages: 40
Subtopics:
1. Principal Component Analysis
2. Weight of Evidence & Information Value
3. Chi square selector
4. Singular Value Decomposition
5. Voting based approach
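As one concrete instance of these techniques, here is a minimal sketch of the chi-square selector wired into a PySpark pipeline; the toy data and column names are illustrative:

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler, ChiSqSelector

    spark = SparkSession.builder.appName("variable-selection").getOrCreate()
    df = spark.createDataFrame(
        [(0.0, 1.0, 12.0, 0.0), (1.0, 0.0, 35.0, 1.0), (0.0, 1.0, 20.0, 0.0)],
        ["f1", "f2", "f3", "label"],
    )

    # Assemble raw columns into a feature vector, then keep the top 2 by chi-square
    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    selector = ChiSqSelector(numTopFeatures=2, featuresCol="features",
                             labelCol="label", outputCol="selected")
    model = Pipeline(stages=[assembler, selector]).fit(df)
    model.transform(df).select("selected", "label").show()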
Chapter 4: Introduction to Supervised Machine Learning Algorithms, Implementations, and Fine-Tuning Techniques
Chapter Goal: Explain and demonstrate supervised machine learning techniques and help readers understand the challenges and nuances of model fitting across multiple evaluation metrics (a short PySpark sketch follows the subtopics)
Number of pages: 40
Subtopics:
1. Supervised:
· Linear regression
· Logistic regression
· Decision Trees
· Random Forests
· Gradient Boosting
· Neural Nets
· Support Vector Machine
· One Vs Rest Classifier
· Naive Bayes
2. Model hyperparameter tuning:
· L1 & L2 regularization
· Elastic net
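A minimal sketch connecting the algorithm and tuning subtopics: logistic regression with elastic-net regularization tuned by cross-validated grid search. The training DataFrame train_df is assumed to carry assembled features and label columns, for example from a pipeline like the Chapter 3 sketch:

    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator
    from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

    lr = LogisticRegression(featuresCol="features", labelCol="label")
    grid = (ParamGridBuilder()
            .addGrid(lr.regParam, [0.01, 0.1])             # regularization strength
            .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])  # 0 = pure L2, 1 = pure L1
            .build())
    cv = CrossValidator(estimator=lr,
                        estimatorParamMaps=grid,
                        evaluator=BinaryClassificationEvaluator(labelCol="label"),
                        numFolds=3)
    best_model = cv.fit(train_df).bestModel   # train_df: assumed training DataFrame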
Chapter 5: Model Validation and Selecting the Best Model
Chapter Goal: Illustrate the different techniques used to validate models, demonstrate which technique should be used for a particular model selection task, and show how to pick the best model from the candidates (a short PySpark sketch follows the subtopics)
Number of pages: 30
Subtopics:
1. Model Validation Statistics:
· ROC and AUC
· Accuracy
· Precision
· Recall
· F1 score
· Misclassification rate
· Kolmogorov-Smirnov (KS) statistic
· Decile analysis
· Lift and gain charts
· R-squared
· Adjusted R-squared
· Mean squared error
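A brief sketch of computing a few of these statistics with Spark ML evaluators, assuming a predictions DataFrame (with label, rawPrediction, and prediction columns) produced by a fitted classifier:

    from pyspark.ml.evaluation import (BinaryClassificationEvaluator,
                                       MulticlassClassificationEvaluator)

    # Area under the ROC curve (uses the rawPrediction column)
    roc = BinaryClassificationEvaluator(labelCol="label",
                                        metricName="areaUnderROC").evaluate(predictions)
    # Accuracy and F1 score (use the hard prediction column)
    acc = MulticlassClassificationEvaluator(labelCol="label",
                                            metricName="accuracy").evaluate(predictions)
    f1 = MulticlassClassificationEvaluator(labelCol="label",
                                           metricName="f1").evaluate(predictions)
    print(f"ROC AUC={roc:.3f}  accuracy={acc:.3f}  F1={f1:.3f}")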
Chapter 6: Unsupervised and Recommendation Algorithms
Chapter Goal: Introduce readers to a different family of algorithms, unsupervised learning and recommendation algorithms, and the use cases in which to apply them (a short PySpark sketch follows the subtopics)
Number of pages: 30
Subtopics:
1. Unsupervised:
· K-Means
· Latent Dirichlet Allocation
2. Collaborative filtering using alternating least squares (ALS)
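A minimal collaborative-filtering sketch using ALS; the user, item, and rating values are toy data:

    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("als-demo").getOrCreate()
    ratings = spark.createDataFrame(
        [(0, 10, 4.0), (0, 11, 2.0), (1, 10, 5.0), (1, 12, 1.0)],
        ["userId", "itemId", "rating"],
    )

    # Fit a small ALS model and produce top-2 recommendations per user
    als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
              rank=5, maxIter=5, coldStartStrategy="drop")
    model = als.fit(ratings)
    model.recommendForAllUsers(2).show(truncate=False)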
Chapter 7: End-to-End Modeling Pipelines
Chapter Goal: Exemplify building an automated modeling framework and introduce readers to an end-to-end model-building pipeline, including experimentation and model tracking (a short MLflow sketch follows the subtopics)
Number of pages: 40
Subtopics:
1. MLflow
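A minimal sketch of experiment tracking with MLflow; the experiment name, parameter, and metric values are illustrative, and best_model stands in for any fitted Spark ML model:

    import mlflow
    import mlflow.spark

    mlflow.set_experiment("pyspark-demo")            # illustrative experiment name
    with mlflow.start_run():
        mlflow.log_param("regParam", 0.1)            # hyperparameter used for this run
        mlflow.log_metric("roc_auc", 0.87)           # a metric computed elsewhere
        mlflow.spark.log_model(best_model, "model")  # best_model: assumed fitted model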
Chapter 8: Productionizing a Machine Learning Model
Chapter Goal: Demonstrate multiple model deployment techniques that fit and serve a variety of real-world use cases (a short Flask sketch follows the subtopics)
Number of pages: 60
Subtopics:
1. Model deployment using HDFS objects
2. Model deployment using Docker
3. Creating a simple Flask API
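A minimal sketch of the Flask API subtopic; the route, payload shape, and placeholder scoring logic are illustrative, and a real service would load and call a persisted model:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json()         # e.g., {"features": [0.2, 1.5, 3.1]}
        # Placeholder scoring: a real service would call a persisted model here
        score = sum(payload["features"])
        return jsonify({"score": score})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)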
Chapter 9: Experimentation
Chapter Goal: Introduce hypothesis testing, its use cases, and optimizations for experiment-based data science applications (a short PySpark sketch follows the subtopics)
Number of pages: 40
Subtopics:
1. Hypothesis testing
2. Sampling techniques
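A brief sketch of a chi-square test of independence and simple random sampling using PySpark; the toy data is illustrative:

    from pyspark.sql import SparkSession
    from pyspark.ml.linalg import Vectors
    from pyspark.ml.stat import ChiSquareTest

    spark = SparkSession.builder.appName("hypothesis-test").getOrCreate()
    data = spark.createDataFrame(
        [(0.0, Vectors.dense(1.0, 0.0)), (1.0, Vectors.dense(0.0, 1.0)),
         (1.0, Vectors.dense(0.0, 1.0)), (0.0, Vectors.dense(1.0, 0.0))],
        ["label", "features"],
    )

    # Chi-square test of independence between each feature and the label
    result = ChiSquareTest.test(data, "features", "label").head()
    print("p-values:", result.pValues)

    # Simple random sampling: a reproducible 50% sample
    data.sample(fraction=0.5, seed=7).show()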
Chapter 10: Other Tips (Optional)
Chapter Goal: This optional bonus chapter offers readers some handy tips and tricks of the trade (a short sketch follows the subtopics)
Number of pages: 20
Subtopics:
1. Tips on when to switch between Python and PySpark
2. Graph networks
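A brief sketch of the Python/PySpark switch: aggregate at scale in Spark, then pull the small result to the driver as a pandas DataFrame for plotting or scikit-learn work; df here stands in for any Spark DataFrame, such as the one from the Chapter 2 sketch:

    # Aggregate at scale in Spark, then convert the small result to pandas
    small_pdf = (
        df.groupBy("high_value")
          .count()
          .toPandas()    # safe only when the result fits in driver memory
    )
    print(small_pdf.head())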
Ramcharan Kakarla is currently a lead data scientist at Comcast, based in Philadelphia. He is a passionate data science and artificial intelligence advocate with more than five years of experience. He holds a master's degree from Oklahoma State University with a specialization in data mining. Prior to OSU, he received his bachelor's degree in electrical and electronics engineering from Sastra University in India. He was born and raised in the coastal town of Kakinada, India. He started his career as a performance engineer with several Fortune 500 clients, including State Farm and British Airways. In his current role he focuses on building data science solutions and frameworks that leverage big data. He has published several papers and posters in the field of predictive analytics. He served as a SAS Global Ambassador in 2015.
Sundar Krishnan is passionate about artificial intelligence and data science, with more than five years of industry experience. He has extensive experience building and deploying customer analytics models and designing machine learning workflow automation. He is currently a lead data scientist at Comcast. Sundar was born and raised in Tamil Nadu, India, and holds a bachelor's degree from Government College of Technology, Coimbatore. He completed his master's at Oklahoma State University, Stillwater. In his spare time, he blogs about his data science work on Medium.
Discover the capabilities of PySpark and its application in the realm of data science. This comprehensive guide with hand-picked examples of daily use cases will walk you through the end-to-end predictive model-building cycle with the latest techniques and tricks of the trade.
Applied Data Science Using PySpark is divided into six sections that walk you through the book. In section 1, you start with the basics of PySpark, focusing on data manipulation. We get you comfortable with the language and then build on that foundation to introduce you to the mathematical functions available off the shelf. In section 2, you dive into the art of variable selection, where we demonstrate various selection techniques available in PySpark. In section 3, we take you on a journey through machine learning algorithms, implementations, and fine-tuning techniques. We also discuss different validation metrics and how to use them to pick the best models. Sections 4 and 5 go through machine learning pipelines and the various methods available to operationalize a model and serve it through Docker or an API. In the final section, you will cover reusable objects for easy experimentation and learn some tricks that can help you optimize your programs and machine learning pipelines.
By the end of this book, you will have seen the flexibility and advantages of PySpark in data science applications. This book is recommended for those who want to unleash the power of parallel computing when working with big datasets.