With the rapid growth of big data and AI, organizations are quickly building data products and solutions in an ad hoc manner. But as these organizations mature, it becomes clear that their analyses and machine learning models are only as reliable as the data they're built upon. The solution? Delta Lake, an open-source storage format that enables building a lakehouse architecture on top of existing storage systems such as S3, ADLS, and GCS.
In this practical book, author Bennie Haelen shows data engineers, data scientists, and data analysts how to get Delta Lake and its unique features up and running. The ultimate goal of building data pipelines and applications is to query processed data and gain insights from it. You'll learn how the choice of storage solution determines the robustness and performance of the data pipeline, from raw ingestion to insights.
With this book, you will:
- Use modern data management and data engineering techniques
- Understand how ACID transactions bring reliability to data lakes at scale
- Run streaming and batch jobs against your data lake concurrently
- Execute update, delete, and merge commands against your data lake
- Use time travel to roll back and examine previous versions of your data
- Build a streaming data quality pipeline following the medallion architecture
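The medallion architecture named in the last bullet refines data through three layers: bronze (raw, as ingested), silver (validated and cleansed), and gold (aggregated, business-ready). As a toy illustration only — real Delta Lake pipelines operate on Spark DataFrames and Delta tables, and every name and record below is hypothetical — the flow can be sketched in plain Python:

```python
# Hypothetical sketch of the medallion architecture's three layers.
# Bronze layer: raw records ingested as-is, including malformed entries.
bronze = [
    {"sensor": "a", "temp": "21.5"},
    {"sensor": "b", "temp": "not-a-number"},  # bad record, dropped at silver
    {"sensor": "a", "temp": "22.1"},
]

def to_silver(raw):
    """Silver layer: validate and cleanse — drop records that fail parsing."""
    silver = []
    for rec in raw:
        try:
            silver.append({"sensor": rec["sensor"], "temp": float(rec["temp"])})
        except (KeyError, ValueError):
            continue  # a real pipeline might quarantine these instead
    return silver

def to_gold(silver):
    """Gold layer: business-level aggregate — average temperature per sensor."""
    totals = {}
    for rec in silver:
        n, acc = totals.get(rec["sensor"], (0, 0.0))
        totals[rec["sensor"]] = (n + 1, acc + rec["temp"])
    return {s: acc / n for s, (n, acc) in totals.items()}

gold = to_gold(to_silver(bronze))  # e.g. {'a': 21.8}
```

The book's streaming data quality pipeline applies the same bronze-to-silver-to-gold progression, with Delta Lake's ACID guarantees making each layer safe to read and write concurrently.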