Data Engineering Zoomcamp

Syllabus

Note: This is preliminary and may change

Week 1: Introduction & Prerequisites

  • Course overview
  • Introduction to GCP
  • Docker and docker-compose
  • Running Postgres locally with Docker (a minimal ingestion sketch follows this section)
  • Setting up infrastructure on GCP with Terraform
  • Preparing the environment for the course
  • Homework

Duration: 2-2.5h
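
As a taster for the "Running Postgres locally with Docker" item above, here is a minimal sketch of loading data into that Postgres instance with pandas. The connection string, credentials, and file name are illustrative assumptions, not the course's exact setup:

```python
# A minimal sketch: load a CSV into a locally Dockerized Postgres with pandas.
# Assumes a container started with hypothetical credentials, e.g.:
#   docker run -e POSTGRES_USER=root -e POSTGRES_PASSWORD=root \
#     -e POSTGRES_DB=ny_taxi -p 5432:5432 postgres:13
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://root:root@localhost:5432/ny_taxi")

df = pd.read_csv("yellow_tripdata_sample.csv")  # hypothetical sample file
df.to_sql("yellow_taxi_data", engine, if_exists="replace", index=False)

print(pd.read_sql("select count(*) from yellow_taxi_data", engine))
```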

Week 2: Data ingestion

Goal: Orchestrating a job to ingest web data to a Data Lake in its raw form.

Instructors: Sejal & Alexey

  • Data Lake (GCS) -- 10 mins

    • Basics: what a Data Lake is
    • ELT vs. ETL
    • Alternatives to the components (S3/HDFS, Redshift, Snowflake, etc.)
  • Orchestration (Airflow) -- 15 mins

    • Basics
      • What is an Orchestration Pipeline?
      • What is a DAG?
  • Demo:

    • Setup: (15 mins)
      • Docker pre-reqs (refresher)
      • Airflow env with Docker
    • Data ingestion DAG - Demo (30 mins; a minimal DAG sketch follows this section):
      • Extraction: download and unpack the data
      • Pre-processing: convert the raw data to Parquet, partitioned by date (raw/yy/mm/dd)
      • Load: raw data to GCS
      • Exploration: create an external table in BigQuery and take a look at the data
      • Further Enhancements: Transfer Service (AWS -> GCP)

Duration: 1.5h
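
The demo above culminates in a DAG; here is a minimal sketch of what such an ingestion DAG can look like on Airflow 2.x. The dag_id, URL, paths, and helper body are illustrative assumptions, not the course's actual code:

```python
# A minimal ingestion DAG sketch for Airflow 2.x (extract -> pre-process).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def convert_to_parquet():
    # Placeholder for the pre-processing step (e.g. CSV -> Parquet via pyarrow).
    ...

with DAG(
    dag_id="data_ingestion_gcs",      # hypothetical name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    download = BashOperator(
        task_id="download",
        bash_command="curl -sSL https://example.com/data.csv -o /tmp/data.csv",
    )
    to_parquet = PythonOperator(
        task_id="to_parquet",
        python_callable=convert_to_parquet,
    )
    # Extraction, then pre-processing; a load-to-GCS task would follow.
    download >> to_parquet
```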

Week 3: Data Warehouse

Goal: Structuring data into a Data Warehouse

Instructor: Ankush

  • Data warehouse (BigQuery) (25 minutes)
    • What a data warehouse solution is
    • What BigQuery is, why it is so fast, and the cost of BQ (5 min)
    • Partitioning and clustering, automatic re-clustering (10 min) -- see the sketch below
    • Pointing to a location in Google Cloud Storage (5 min)
    • Loading data into BigQuery & Postgres (10 min) -- using an Airflow operator?
    • BQ best practices
    • Misc: BQ geolocation, BQ ML
    • Alternatives (Snowflake/Redshift)

Duration: 1-1.5h
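
For the partitioning and clustering item above, here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are illustrative assumptions:

```python
# A minimal sketch: create a day-partitioned, clustered BigQuery table.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application default credentials

table = bigquery.Table(
    "my-project.trips_data.yellow_trips",  # hypothetical table id
    schema=[
        bigquery.SchemaField("pickup_datetime", "TIMESTAMP"),
        bigquery.SchemaField("vendor_id", "STRING"),
        bigquery.SchemaField("total_amount", "FLOAT"),
    ],
)
# Partition by day on the timestamp column; cluster by vendor_id so rows
# with the same vendor are stored together within each partition.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="pickup_datetime",
)
table.clustering_fields = ["vendor_id"]

client.create_table(table)
```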

Week 4: Analytics engineering

Goal: Transforming Data in DWH to Analytical Views

Instructor: Victoria

  • Basics (15 mins)
    • What is dbt?
    • ETL vs. ELT
    • Data modeling
    • Where dbt fits in the tech stack
  • Usage (combination of coding + theory) (1:30-1:45)
    • Anatomy of a dbt model: written code vs. compiled sources (illustrated in the sketch below)
    • Materialisations: table, view, incremental, ephemeral
    • Seeds
    • Sources and ref
    • Jinja and Macros
    • Tests
    • Documentation
    • Packages
    • Deployment: local development vs production
    • dbt Cloud: scheduler, sources and data catalog (Airflow)
  • Google Data Studio -> dashboard
  • Extra knowledge:
    • dbt CLI (local)

Duration: 2h
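
To illustrate the "written code vs. compiled sources" idea above, here is a simplified sketch that mimics dbt's compilation step with plain jinja2. The model SQL and the hard-coded ref() resolution are assumptions for illustration; real dbt resolves ref() against its project manifest:

```python
# A simplified sketch of how dbt compiles a Jinja-templated model into SQL.
from jinja2 import Environment

MODEL_SQL = """
select vendor_id, count(*) as trips
from {{ ref('stg_trips') }}
group by 1
"""

def ref(model_name: str) -> str:
    # Hypothetical resolution: dbt would return the model's fully
    # qualified relation, e.g. project.dataset.table on BigQuery.
    return f"analytics.{model_name}"

compiled = Environment().from_string(MODEL_SQL).render(ref=ref)
print(compiled)  # plain SQL, with ref() replaced by a concrete relation
```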

Week 5: Batch processing

Goal: Processing data in batch with Spark

Instructor: Alexey

  • Distributed processing (Spark) (40 + ? minutes)
    • What is Spark; Spark clusters (5 mins)
    • Explaining the potential of Spark (10 mins)
    • What broadcast variables, partitioning, and shuffles are (10 mins)
    • Pre-joining data (10 mins) -- see the sketch below
    • Use case
    • What else is out there (Flink) (5 mins)
  • Extending the orchestration env (Airflow) (30 minutes)
    • BigQuery on Airflow (10 mins)
    • Spark on Airflow (10 mins)

Duration: 1-1.5h
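
For the broadcast variables and pre-joining items above, here is a minimal PySpark sketch; file paths and column names are illustrative assumptions:

```python
# A minimal sketch of a broadcast join: the small lookup table is shipped
# to every executor, so the large table does not need to be shuffled.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-demo").getOrCreate()

trips = spark.read.parquet("data/raw/trips/")          # large fact table
zones = spark.read.csv("data/zones.csv", header=True)  # small lookup table

joined = trips.join(
    F.broadcast(zones),  # hint Spark to broadcast the small side
    trips["pickup_zone_id"] == zones["zone_id"],
)
joined.groupBy("borough").count().show()
```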

Week 6: Streaming

Goal: Processing data in real time with Kafka

Instructor: Ankush

  • Basics
    • What is Kafka
    • Kafka internals, brokers
    • Partitioning of a Kafka topic
    • Replication of a Kafka topic
  • Consumer-producer -- see the sketch below
  • Schemas (Avro)
  • Streaming
    • Kafka Streams
  • Kafka Connect
  • Alternatives (Pub/Sub, Pulsar)

Duration: 1.5h
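
For the consumer-producer item above, here is a minimal sketch with the kafka-python library, assuming a broker on localhost:9092 and a hypothetical "rides" topic:

```python
# A minimal producer/consumer sketch with kafka-python.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("rides", {"vendor_id": 1, "total_amount": 12.5})
producer.flush()

consumer = KafkaConsumer(
    "rides",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # read the topic from the beginning
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)
    break  # stop after one message for the demo
```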

Week 7, 8 & 9: Project

  • Putting everything we learned into practice

Duration: 2-3 weeks

  • Upcoming buzzwords
    • Delta Lake / Lakehouse
    • Databricks
    • Apache Iceberg
    • Apache Hudi
    • Data mesh
    • ksqlDB
    • Streaming analytics
    • MLOps

Duration: 30 mins

Overview

Architecture diagram

Technologies

  • Google Cloud Platform (GCP): Cloud-based auto-scaling platform by Google
    • Google Cloud Storage (GCS): Data Lake
    • BigQuery: Data Warehouse
  • Terraform: Infrastructure-as-Code (IaC)
  • Docker: Containerization
  • SQL: Data Analysis & Exploration
  • Airflow: Pipeline Orchestration
  • dbt: Data Transformation
  • Spark: Distributed Processing
  • Kafka: Streaming

Prerequisites

To get the most out of this course, you should feel comfortable with coding and the command line, and know the basics of SQL. Prior experience with Python will be helpful, but you can pick up Python relatively quickly if you have experience with other programming languages.

Prior experience with data engineering is not required.

Instructors

FAQ

  • Q: I registered, but haven't received a confirmation email. Is it normal? A: Yes, it's normal. It's not automated, but you will receive an email eventually.
  • Q: At what time of the day will the sessions happen? A: Office hours will happen on Mondays at 17:00 CET. But everything will be recorded, so you can watch it whenever it's convenient for you.
  • Q: Will there be a certificate? A: Yes, if you complete the project.
  • Q: I'm 100% not sure I'll be able to attend. Can I still sign up? A: Yes, please do! You'll receive all the updates and then you can watch the course at your own pace.
  • Q: Do you plan to run a ML engineering course as well? A: Glad you asked. We do :)

Contributors

alexeygrigorev, ankushkhanna, itnadigital, mccurcio, sejalv, victoriapm, ziritrion
