The reader learns about big data and its usefulness, and how Hadoop and its ecosystem process big data to extract useful information. The chapter also covers the shortcomings of Hadoop that call for another big-data processing platform.
No of pages: 15-20
Sub - Topics:
1. Introduction to Big-Data
2. Big Data challenges and processing technology
3. Hadoop, structure and its ecosystem
4. Shortcomings of Hadoop
Chapter 2: Python, NumPy and SciPy
Chapter Goal:
The goal of this chapter is to acquaint the reader with Python, NumPy, and SciPy.
No of pages: 25-30
Sub - Topics
1. Introduction to Python
2. Python collection, String Function and Class
3. NumPy and ndarray
4. SciPy
Chapter 3: Spark: Introduction, Installation, Structure and PySpark
Chapter Goal:
This chapter introduces Spark and its installation on a single machine. It then continues with the structure of Spark. Finally, PySpark is introduced.
No of pages : 15-20
Sub - Topics:
1. Introduction to Spark
2. Spark installation on Ubuntu
3. Spark architecture
4. PySpark and Its architecture
Chapter 4: Resilient Distributed Dataset (RDD)
Chapter Goal:
This chapter deals with the core of Spark, the RDD, and the operations on it.
No of pages: 25-30
Sub - Topics:
1. Introduction to RDD and its characteristics
2. Transformations and Actions
3. Operations on RDD (map, filter, set operations, and many more)
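As a preview of the transformations listed above, their per-element behavior can be sketched in plain Python (no Spark required); PySpark's `RDD.map` and `RDD.filter` compute the same results, only distributed across a cluster. The sample data here is hypothetical.

```python
# Plain-Python analogue of two core RDD transformations (no Spark needed).
# In PySpark this would be: sc.parallelize(data).map(...).filter(...).collect()
data = [1, 2, 3, 4, 5]

squared = list(map(lambda x: x * x, data))           # analogue of RDD.map
evens = list(filter(lambda x: x % 2 == 0, squared))  # analogue of RDD.filter

print(squared)  # [1, 4, 9, 16, 25]
print(evens)    # [4, 16]
```

The key difference in Spark is laziness: transformations only build a lineage graph, and nothing executes until an action such as `collect()` is called.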
Chapter 5: The Power of Pairs: Paired RDD
Chapter Goal:
Paired RDDs can make many complex computations easy to program. Learners will study paired RDDs and the operations on them.
No of pages: 15-20
Sub - Topics:
1. Introduction to Paired RDD
2. Operations on paired RDD (mapValues, reduceByKey, …)
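The effect of a key-based reduction can be previewed with a plain-Python sketch over hypothetical word-count pairs; PySpark's `reduceByKey` applies the same pairwise merge per key, in parallel across partitions.

```python
from collections import defaultdict

# Plain-Python analogue of reduceByKey on (key, value) pairs.
# In PySpark: pairs_rdd.reduceByKey(lambda a, b: a + b).collect()
pairs = [("spark", 1), ("python", 1), ("spark", 1), ("rdd", 1), ("spark", 1)]

counts = defaultdict(int)
for key, value in pairs:
    counts[key] += value  # pairwise merge, like lambda a, b: a + b

print(dict(counts))  # {'spark': 3, 'python': 1, 'rdd': 1}
```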
Chapter 6: Advanced PySpark and PySpark Application Optimization
Chapter Goal:
The reader will learn about the advanced PySpark topics of broadcast variables and accumulators. In this chapter the learner will also study PySpark application optimization.
No of pages: 30-35
Sub - Topics:
1. Spark Accumulator
2. Spark Broadcast
3. Spark Code Optimization
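The two shared-variable types listed above can be sketched on a single machine (class and data names here are illustrative, not Spark's): an accumulator is write-only from tasks and read on the driver, while a broadcast variable is a read-only value shipped once to every worker. In PySpark these would be `sc.accumulator(0)` and `sc.broadcast(lookup)`.

```python
# Single-machine sketch of Spark's two shared-variable types.
class Accumulator:
    """Tasks only call add(); the driver reads .value afterwards."""
    def __init__(self, initial=0):
        self.value = initial

    def add(self, amount):
        self.value += amount

# Broadcast-style read-only lookup table, shared by every "task".
severity = {"ERROR": 3, "WARN": 2, "INFO": 1}
errors = Accumulator(0)

for line in ["INFO ok", "ERROR disk full", "ERROR network down"]:
    level = line.split()[0]
    if severity.get(level, 0) >= 3:  # tasks read the shared table
        errors.add(1)                # tasks write, never read

print(errors.value)  # 2
```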
Chapter 7: IO in PySpark
Chapter Goal:
We will learn PySpark I/O in this chapter: reading and writing .csv and .json files. We will also learn how to connect to different databases with PySpark.
No of pages:20-30
Sub - Topics:
1. Reading and writing JSON and .csv files
2. Reading data from HDFS
3. Reading data from different databases and writing data to different databases
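The JSON round-trip in subtopic 1 can be previewed with the standard library alone (a hypothetical one-record file); in PySpark the same work is done at scale with `spark.read.json(path)` and the corresponding `DataFrame.write` methods.

```python
import json
import os
import tempfile

# Stdlib sketch of a .json write/read round-trip.
# In PySpark: spark.read.json(path) returns a DataFrame of such records.
record = {"name": "spark", "version": 2}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.json")
    with open(path, "w") as f:
        json.dump(record, f)   # write the .json file
    with open(path) as f:
        loaded = json.load(f)  # read it back

print(loaded)  # {'name': 'spark', 'version': 2}
```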
Chapter 8: PySpark Streaming
Chapter Goal:
The reader will understand real-time data analysis with PySpark Streaming. This chapter focuses on the PySpark Streaming architecture, discretized stream operations, and windowing operations.
No of pages:30-40
Sub - Topics:
1. PySpark Streaming architecture
2. Discretized Stream and operations
3. Concept of windowing operations
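The windowing concept in subtopic 3 can be illustrated with a plain-Python sliding window over hypothetical per-batch counts; PySpark Streaming's window operations aggregate a DStream's micro-batches the same way, with a window duration and a slide duration.

```python
# Plain-Python sketch of a sliding-window sum over micro-batch counts.
batch_counts = [5, 3, 8, 2, 7]  # one event count per micro-batch
window, slide = 3, 1            # window spans 3 batches, slides by 1

window_sums = [
    sum(batch_counts[i:i + window])
    for i in range(0, len(batch_counts) - window + 1, slide)
]
print(window_sums)  # [16, 13, 17]
```

Each output value overlaps its neighbors in two batches, which is exactly why windowed streaming results update smoothly rather than in disjoint chunks.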
Chapter 9: SparkSQL
Chapter Goal:
In this chapter the reader will learn about SparkSQL, and the SparkSQL DataFrame is introduced. The learner will also see how to run SQL commands using SparkSQL.
No of pages: 40-50
Sub - Topics:
1. SparkSQL
2. SQL with SparkSQL
3. Hive commands with SparkSQL
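The core idea of Chapter 9, registering data as a table and querying it with SQL, can be previewed with the standard library's sqlite3 as a stand-in (table and data are illustrative); in SparkSQL the analogous steps are `df.createOrReplaceTempView("logs")` followed by `spark.sql(...)`.

```python
import sqlite3

# Stdlib stand-in for the SparkSQL workflow: load rows, then query with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (level TEXT, count INTEGER)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?)",
    [("ERROR", 3), ("WARN", 5), ("ERROR", 2)],
)

total_errors = conn.execute(
    "SELECT SUM(count) FROM logs WHERE level = 'ERROR'"
).fetchone()[0]
conn.close()

print(total_errors)  # 5
```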
Raju Mishra has strong interests in data science and in systems capable of handling large amounts of data and running complex mathematical models through computational programming. This inspired him to pursue an M.Tech in computational sciences from the Indian Institute of Science in Bangalore, India. Raju primarily works in the areas of data science and its different applications. Working as a corporate trainer, he has developed unique insights that help him teach and explain complex ideas with ease. Raju is also a data science consultant solving complex industrial problems. He works with programming tools such as R, Python, scikit-learn, Statsmodels, Hadoop, Hive, Pig, Spark, and many others.
Quickly find solutions to common programming problems encountered while processing big data. Content is presented in the popular problem-solution format. Look up the programming problem that you want to solve. Read the solution. Apply the solution directly in your own code. Problem solved!
PySpark Recipes covers Hadoop and its shortcomings. The architectures of Spark, PySpark, and RDDs are presented. You will learn to apply RDDs to solve day-to-day big-data problems. Python and NumPy are included, making it easy for new learners of PySpark to understand and adopt the model.
What You Will Learn:
Understand the advanced features of PySpark and SparkSQL