Overview
The Pipeline Design skill helps developers and data engineers architect high-performance data integration workflows using industry-standard ETL and ELT patterns. It provides actionable guidance on batch and streaming architectures, and on preserving data integrity through idempotent operations, incremental loading strategies, and structured staging patterns. Whether you are building real-time event-driven systems with Kafka or batch processing for modern data warehouses such as Snowflake and BigQuery, this skill helps you implement reliable error handling and observability for production-grade data pipelines.
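To make the idempotency and incremental-loading ideas concrete, here is a minimal sketch of a watermark-driven incremental load with a primary-key upsert. It uses SQLite purely as a stand-in for a warehouse, and the `target` table, column names, and `incremental_load` helper are illustrative assumptions, not part of the skill itself. Because the upsert overwrites rows by key instead of appending, replaying the same batch (e.g. after a retry) leaves the target unchanged.

```python
import sqlite3

def incremental_load(conn, source_rows, watermark):
    """Idempotently upsert source rows newer than the watermark.

    Re-running with the same inputs leaves the target unchanged:
    the primary-key upsert overwrites rather than duplicates.
    (Illustrative sketch; table and column names are hypothetical.)
    """
    # Incremental extract: only rows past the last processed watermark.
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    conn.executemany(
        """
        INSERT INTO target (id, value, updated_at)
        VALUES (:id, :value, :updated_at)
        ON CONFLICT(id) DO UPDATE SET
            value = excluded.value,
            updated_at = excluded.updated_at
        """,
        new_rows,
    )
    conn.commit()
    # Advance the watermark to the newest row processed in this batch.
    return max((r["updated_at"] for r in new_rows), default=watermark)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE target (id INTEGER PRIMARY KEY, value TEXT, updated_at TEXT)"
)

source = [
    {"id": 1, "value": "a", "updated_at": "2024-01-01"},
    {"id": 2, "value": "b", "updated_at": "2024-01-02"},
]
wm = incremental_load(conn, source, watermark="")
# A retry with the same batch is a no-op: still two rows, same values.
incremental_load(conn, source, watermark="")
row_count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```

In a real warehouse the same pattern maps onto `MERGE` (Snowflake, BigQuery) and a persisted watermark table; the key design choice is that every write path is safe to replay.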