The Azure Data Lakehouse Toolkit: Building and Scaling Data Lakehouses on Azure with Delta Lake, Apache Spark, Databricks, Synapse Analytics, and Snowflake
13. Dynamic Partition Pruning for Querying Star Schemas
14. Z-Ordering and Data Skipping
15. Adaptive Query Execution
16. Bloom Filter Index
17. Hyperspace
Part VI. Lakehouse Capabilities
18. Auto Loader Resource Management
19. Advanced Schema Evolution with Auto Loader
20. Python Wheels
21. Security and Controls
22. Unity Catalog
Ron C. L’Esteve is a professional author, trusted technology leader, and digital innovation strategist residing in Chicago, IL, USA. He is well known for his impactful books and award-winning articles on Azure Data & AI architecture and engineering. He possesses deep technical skills and experience in designing, implementing, and delivering modern Azure Data & AI projects for numerous clients around the world.
With several Azure Data, AI, and Lakehouse certifications under his belt, Ron has been a go-to technical advisor for some of the largest and most impactful Azure implementation projects in the world. He has been responsible for scaling key data architectures and defining the roadmap and strategy for future data and business intelligence needs. He challenges customers to grow by thoroughly understanding fluid business opportunities, then enabling change by translating them into high-quality, sustainable technical solutions that solve complex challenges and promote digital innovation and transformation.
Ron is a gifted presenter and trainer, known for his innate ability to clearly articulate and explain complex topics to audiences of all skill levels. He applies a practical and business-oriented approach by taking transformational ideas from concept to scale. He is a true enabler of positive and impactful change by championing a growth mindset.
Design and implement a modern data lakehouse on the Azure Data Platform using Delta Lake, Apache Spark, Azure Databricks, Azure Synapse Analytics, and Snowflake. This book teaches you the intricate details of the Data Lakehouse Paradigm and how to efficiently design a cloud-based data lakehouse using highly performant, cutting-edge Apache Spark capabilities on Azure Databricks, Azure Synapse Analytics, and Snowflake. You will learn to write efficient PySpark code for batch and streaming ELT jobs on Azure, and you will follow practical, scenario-based examples showing how to apply the capabilities of Delta Lake and Apache Spark to optimize performance and to secure, share, and manage a high volume, high velocity, and high variety of data in your lakehouse with ease.
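For instance, a streaming ELT job of the kind the book covers might use Databricks Auto Loader to ingest files incrementally. The sketch below is illustrative rather than taken from the book: the paths and the orders_bronze table name are hypothetical, and it assumes a Databricks runtime where the cloudFiles source is available.

```python
# Illustrative Auto Loader streaming ELT sketch; paths and table name are
# hypothetical, and the "cloudFiles" source requires a Databricks runtime.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incrementally pick up newly arriving JSON files from a landing zone.
raw = (spark.readStream.format("cloudFiles")
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", "/mnt/schemas/orders")  # hypothetical
       .load("/mnt/landing/orders"))                                # hypothetical

# Append the stream to a Delta table, tracking progress via a checkpoint.
(raw.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")        # hypothetical
    .outputMode("append")
    .toTable("orders_bronze"))
```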
The patterns of success that you acquire from reading this book will help you hone your skills to build high-performing and scalable ACID-compliant lakehouses using flexible and cost-efficient decoupled storage and compute capabilities. Extensive coverage of Delta Lake ensures that you are aware of and can benefit from all that this new, open-source storage layer can offer. In addition to the deep examples on Databricks in the book, there is coverage of alternative platforms such as Synapse Analytics and Snowflake so that you can make the right platform choice for your needs.
After reading this book, you will be able to implement Delta Lake capabilities, including Schema Evolution, Change Feed, Live Tables, Sharing, and Clones to enable better business intelligence and advanced analytics on your data within the Azure Data Platform.
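To make those capabilities concrete, here is a minimal, illustrative PySpark sketch (not from the book) that enables Delta's Change Data Feed on a hypothetical sales_bronze table, appends data with automatic schema evolution, and then reads the captured changes. It assumes a Spark session with Delta Lake configured, such as a Databricks cluster.

```python
# Illustrative Delta Lake sketch (hypothetical table name); assumes a Spark
# session with Delta Lake configured, e.g. a Databricks cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Create a Delta table with Change Data Feed enabled.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_bronze (id BIGINT, amount DOUBLE)
    USING DELTA
    TBLPROPERTIES (delta.enableChangeDataFeed = true)
""")

# Append rows carrying a new "channel" column; mergeSchema evolves the schema.
new_rows = spark.createDataFrame([(1, 9.99, "web")], ["id", "amount", "channel"])
(new_rows.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("sales_bronze"))

# Read the row-level changes captured since table version 0.
changes = (spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 0)
    .table("sales_bronze"))
changes.show()
```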
What You Will Learn
Implement the Data Lakehouse Paradigm on Microsoft’s Azure cloud platform
Benefit from the new Delta Lake open-source storage layer for data lakehouses
Take advantage of schema evolution, change feeds, live tables, and more
Write functional PySpark code for data lakehouse ELT jobs
Optimize Apache Spark performance through partitioning, indexing, and other tuning options (see the sketch after this list)
Choose between alternatives such as Databricks, Synapse Analytics, and Snowflake
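As a taste of those tuning options, the following illustrative PySpark sketch partitions a Delta table so queries can prune by date and then Z-Orders it to improve data skipping. The paths, table, and column names are hypothetical, and it assumes an environment such as Databricks (or Delta Lake 2.0+) where OPTIMIZE ... ZORDER BY is supported.

```python
# Illustrative tuning sketch (hypothetical paths, table, and column names);
# OPTIMIZE ... ZORDER BY assumes Databricks or Delta Lake 2.0+.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Partition by date so filters on event_date prune whole partitions.
events = spark.read.json("/mnt/raw/events")  # hypothetical source
(events.withColumn("event_date", F.to_date("event_ts"))
    .write.format("delta")
    .partitionBy("event_date")
    .mode("overwrite")
    .saveAsTable("events_silver"))

# Co-locate related rows on a high-cardinality column so file-level
# statistics can skip irrelevant files at query time.
spark.sql("OPTIMIZE events_silver ZORDER BY (customer_id)")
```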