Monitors and analyzes dbt execution metadata to track model performance, test reliability, and data quality metrics over time.
Leverage the dbt Artifacts package to capture comprehensive execution metadata across any dbt version or platform. This skill enables AI agents to help you monitor test and model execution history, identify performance bottlenecks, and maintain long-term observability of your data pipelines. By storing run details in queryable tables, it facilitates programmatic access to execution metadata, helping teams compare historical patterns, automate alerting, and optimize Snowflake resources effectively.
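Before the metadata lands in queryable tables, it originates in dbt's `run_results.json` artifact, which records one entry per executed node with its status and timing. As a minimal sketch of the kind of programmatic access described above, the helper below parses that artifact and ranks nodes by runtime; the field names (`results[].unique_id`, `status`, `execution_time`) follow dbt's documented artifact schema, but verify them against your dbt version.

```python
import json

def summarize_run_results(path):
    """Rank nodes in a dbt run_results.json artifact by execution time.

    Field names follow dbt's artifact schema (results[].unique_id,
    status, execution_time); confirm against your dbt version's docs.
    """
    with open(path) as f:
        artifact = json.load(f)
    rows = [
        {
            "unique_id": r["unique_id"],
            "status": r["status"],
            "execution_time": r.get("execution_time", 0.0),
        }
        for r in artifact["results"]
    ]
    # Slowest nodes first -- the usual starting point for bottleneck hunting.
    return sorted(rows, key=lambda r: r["execution_time"], reverse=True)
```

The dbt Artifacts package automates exactly this kind of extraction on every invocation, so the same information accumulates in warehouse tables instead of one-off JSON files.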
Key Features

1. Offers guidance on storage management and long-term archival strategies
2. Enables cross-platform execution logging for dbt Cloud, Airflow, and local CLI
3. Provides SQL templates for monitoring test reliability and performance regressions
4. Captures detailed runtime metrics for models, tests, seeds, and snapshots
5. Facilitates deep-dive analysis of dbt run patterns and invocation history
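To illustrate the test-reliability monitoring the SQL templates target, the sketch below runs an equivalent query against an in-memory SQLite table. The table and column names (`test_executions`, `test_name`, `run_started_at`, `status`) are illustrative assumptions, not the package's actual schema; the dbt Artifacts package creates similarly shaped execution-history tables in your warehouse.

```python
import sqlite3

# Assumed schema standing in for the package's test-execution history table;
# the real table and column names come from the dbt Artifacts package models.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_executions (
        test_name TEXT, run_started_at TEXT, status TEXT
    )
""")
conn.executemany(
    "INSERT INTO test_executions VALUES (?, ?, ?)",
    [
        ("not_null_orders_id", "2024-01-01", "pass"),
        ("not_null_orders_id", "2024-01-02", "fail"),
        ("not_null_orders_id", "2024-01-03", "pass"),
        ("unique_customers_id", "2024-01-01", "pass"),
        ("unique_customers_id", "2024-01-02", "pass"),
    ],
)

# Flaky tests: mixed pass/fail history, ranked by failure rate.
flaky = conn.execute("""
    SELECT test_name,
           AVG(CASE WHEN status = 'fail' THEN 1.0 ELSE 0.0 END) AS fail_rate,
           COUNT(*) AS runs
    FROM test_executions
    GROUP BY test_name
    HAVING fail_rate > 0 AND fail_rate < 1
    ORDER BY fail_rate DESC
""").fetchall()
```

The `HAVING fail_rate > 0 AND fail_rate < 1` filter isolates tests that sometimes pass and sometimes fail, which are the ones worth investigating first; always-failing tests usually indicate a real data problem rather than flakiness.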
Use Cases

1. Setting up centralized dbt monitoring across multiple deployment environments
2. Investigating slow-running models or flaky tests in historical execution data
3. Creating long-term performance baselines and data quality dashboards
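A performance baseline only pays off when it feeds an automated check. As a minimal sketch (the threshold factor and the flat `{model: runtimes}` shape are assumptions for illustration), the function below compares each model's latest runtime against its historical average and flags regressions:

```python
from statistics import mean

def flag_regressions(history, latest, threshold=1.5):
    """Flag models whose latest runtime exceeds their historical average
    by the given factor.

    history: {model_name: [past execution times in seconds]}
    latest:  {model_name: latest execution time in seconds}
    threshold: illustrative default; tune to your pipeline's variance.
    """
    flagged = {}
    for model, runtime in latest.items():
        past = history.get(model)
        if not past:
            continue  # no baseline yet for new models
        baseline = mean(past)
        if runtime > threshold * baseline:
            flagged[model] = (baseline, runtime)
    return flagged

history = {"orders": [30.0, 32.0, 31.0], "customers": [5.0, 6.0]}
latest = {"orders": 95.0, "customers": 5.5}
flagged = flag_regressions(history, latest)
# flags "orders" (baseline 31.0s, latest 95.0s); "customers" is within range
```

In practice the `history` dict would come from a query over the execution-history tables the package maintains, and the flagged list would drive alerting.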