
Hands-on Guide to Apache Spark 3: Build Scalable Computing Engine for Batch and Stream Data Processing

ISBN-13: 9781484293799 / English

Alfonso Antolínez García
Price: 261.02 zł
(net: 248.59 zł, VAT: 5%)

Lowest price in the last 30 days: 250.57 zł
Order fulfillment time: approx. 22 business days
Delivery in 2026

Free delivery!

Beginning-Intermediate user level

This book explains how to scale Apache Spark 3 to handle massive amounts of data through either batch or streaming processing. It covers how to use Spark’s structured APIs to perform complex data transformations and analyses that you can use to implement end-to-end analytics workflows.
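
As a hedged taste of what working with Spark's structured APIs looks like, here is a minimal PySpark batch sketch; the input file, schema, and column names (events.json, country, amount) are illustrative assumptions, not examples taken from the book.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a local Spark session.
spark = SparkSession.builder.appName("batch-sketch").getOrCreate()

# Hypothetical input: a JSON file with `country` and `amount` columns (an assumption, not from the book).
events = spark.read.json("events.json")

# A typical structured-API pipeline: filter, group, aggregate, sort.
totals = (
    events
    .filter(F.col("amount") > 0)
    .groupBy("country")
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("total_amount"))
)

totals.show()

Each of these calls only builds a logical plan; nothing is computed until the action show() runs, which is the lazy evaluation the table of contents refers to in section 2.5.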
 
This book covers Spark 3's new features, theoretical foundations, and application architecture. The first section introduces the Apache Spark ecosystem as a unified engine for large-scale data analytics and shows you how to run and fine-tune your first application in Spark. The second section centers on batch processing suited to end-of-cycle processing and on data ingestion through files and databases. It explains the Spark DataFrame API as well as working with structured and unstructured data in Apache Spark. The last section deals with scalable, high-throughput, fault-tolerant stream processing workloads that handle real-time data. Here you'll learn about Apache Spark Streaming's execution model, the architecture of Spark Streaming, and monitoring, reporting, and recovering Spark Streaming applications. A full chapter is devoted to future directions for Spark Streaming. With real-world use cases, code snippets, and notebooks hosted on GitHub, this book will give you an understanding of large-scale data analysis concepts and help you put them to use.
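
To make the streaming side of the description concrete, below is a minimal Structured Streaming sketch, not code from the book; the socket source, host/port, and checkpoint path are assumptions chosen only for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Hypothetical source: lines of text arriving on a local TCP socket (host and port are assumptions).
lines = (
    spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()
)

# Classic streaming word count over the unbounded input.
counts = (
    lines
    .select(F.explode(F.split(F.col("value"), " ")).alias("word"))
    .groupBy("word")
    .count()
)

# Console sink with a checkpoint directory so the query state can be recovered after a failure.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/spark-checkpoint")  # assumed local path
    .start()
)

query.awaitTermination()

The output mode, sink, and checkpointing knobs used here correspond to the topics listed for Chapters 8 through 10 in the table of contents below.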

Upon completing this book, you will have the knowledge and skills to seamlessly implement large-scale batch and streaming workloads to analyze real-time data streams with Apache Spark.

What You Will Learn
  • Master the concepts of Spark clusters and batch data processing
  • Understand data ingestion, transformation, and data storage (see the sketch after this list)
  • Gain insight into essential stream processing concepts and different streaming architectures
  • Implement streaming jobs and applications with Spark Streaming
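
As referenced in the ingestion-and-storage bullet above, here is a small, hedged sketch of reading a file into a DataFrame and persisting it in a columnar format; the file names and options are assumptions for illustration, not the book's examples.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Ingest a hypothetical CSV file, letting Spark infer column types from a header row.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("sales.csv")
)

# Persist the data in a columnar format for efficient downstream batch queries.
raw.write.mode("overwrite").parquet("sales_parquet")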

Who This Book Is For
Data engineers, data analysts, machine learning engineers, Python and R programmers

Categories:
Computer Science, Databases
BISAC categories:
Computers > Information Theory
Computers > Artificial Intelligence - General
Computers > Languages - Python
Publisher:
Apress
Language:
English
ISBN-13:
9781484293799

Part I. Apache Spark Batch Data Processing

Chapter 1: Introduction to Apache Spark for Large-Scale Data Analytics
1.1. What Is Apache Spark?
1.2. Spark Unified Analytics
1.3. Batch vs Streaming Data
1.4. Spark Ecosystem

Chapter 2: Getting Started with Apache Spark
2.2. Scala and PySpark Interfaces
2.3. Spark Application Concepts
2.4. Transformations and Actions in Apache Spark
2.5. Lazy Evaluation in Apache Spark
2.6. First Application in Spark
2.7. Apache Spark Web UI

Chapter 3: Spark Dataframe API

Chapter 4: Spark Dataset API

Chapter 5: Structured and Unstructured Data with Apache Spark
5.1. Data Sources
5.2. Generic Load/Save Functions
5.3. Generic File Source Options
5.4. Parquet Files
5.5. ORC Files
5.6. JSON Files
5.7. CSV Files
5.8. Text Files
5.9. Hive Tables
5.10. JDBC To Other Databases

Chapter 6: Spark Machine Learning with MLlib

Part II. Spark Data Streaming
Chapter 7: Introduction to Apache Spark Streaming
7.1. Apache Spark Streaming’s Execution Model
7.2. Stream Processing Architectures
7.3. Architecture of Spark Streaming: Discretized Streams
7.4. Benefits of Discretized Stream Processing
7.4.1. Dynamic Load Balancing
7.4.2. Fast Failure and Straggler Recovery

Chapter 8: Structured Streaming
8.1. Streaming Analytics
8.2. Connecting to a Stream
8.3. Preparing the Data in a Stream
8.4. Operations on a Streaming Dataset

Chapter 9: Structured Streaming Sources
9.1. File Sources
9.2. Apache Kafka Source
9.3. A Rate Source

Chapter 10: Structured Streaming Sinks
10.1. Output Modes
10.2. Output Sinks
10.3. File Sink
10.4. The Kafka Sink
10.5. The Memory Sink             
10.6. Streaming Table APIs
10.7. Triggers
10.8. Managing Streaming Queries
10.9. Monitoring Streaming Queries
10.9.1. Reading Metrics Interactively
10.9.2. Reporting Metrics Programmatically Using Asynchronous APIs
10.9.3. Reporting Metrics Using Dropwizard
10.9.4. Recovering from Failures with Checkpointing
10.9.5. Recovery Semantics after Changes in a Streaming Query

Chapter 11: Future Directions for Spark Streaming
11.1. Backpressure
11.2. Dynamic Scaling
11.3. Event Time and Out-of-Order Data
11.4. UI Enhancements
11.5. Continuous Processing

Chapter 12: Watermarks: A Deep Survey of Temporal Progress Metrics


Alfonso Antolínez García is a senior IT manager with a long professional career at several multinational companies, including Bertelsmann SE, Lafarge, and TUI AG, spanning the media, building materials, and leisure industries. He also works as a university professor, teaching artificial intelligence, machine learning, and data science. In his spare time, he writes research papers on artificial intelligence, mathematics, physics, and the applications of information theory to other sciences.
