Manage, schedule, and monitor Kubeflow-based Data Science Pipelines on Red Hat OpenShift AI.
This skill enables developers and data scientists to orchestrate the full lifecycle of Data Science Pipelines (Kubeflow 2.0) directly through Claude. It automates the setup of pipeline servers (DSPA), submits execution runs from YAML or JSON definitions, and configures recurring schedules using cron expressions. Users can monitor real-time execution status, perform deep diagnostics on failing steps, and retrieve container logs, ensuring a seamless MLOps workflow within Red Hat OpenShift AI environments.
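Pipeline server setup revolves around a `DataSciencePipelinesApplication` (DSPA) custom resource in the project namespace. A minimal sketch of that manifest is below; the `apiVersion`, field names, and all host/bucket/secret values are assumptions based on common DSPA examples, so verify them against the CRD installed on your cluster:

```yaml
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
kind: DataSciencePipelinesApplication
metadata:
  name: dspa
  namespace: my-data-science-project   # placeholder project namespace
spec:
  dspVersion: v2
  objectStorage:
    externalStorage:
      host: s3.example.com             # placeholder S3 endpoint
      bucket: pipeline-artifacts       # placeholder artifact bucket
      s3CredentialsSecret:
        secretName: dspa-s3-credentials
        accessKey: AWS_ACCESS_KEY_ID
        secretKey: AWS_SECRET_ACCESS_KEY
```

Applying the manifest (for example with `oc apply -f dspa.yaml`) lets the operator deploy the pipeline API server for that project; the skill's health verification then amounts to checking that the DSPA resource reports a ready status and its route answers.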
Key Features
1. Pipeline run submission from YAML/JSON with parameter injection
2. Integrated log retrieval and resource diagnostics for troubleshooting
3. Automated Pipeline Server (DSPA) setup and health verification
4. Recurring workflow scheduling using cron expressions
5. Real-time step-level progress monitoring and status tracking
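Run submission with parameter injection can be sketched with the Kubeflow Pipelines 2.x SDK (`kfp.Client`). The route URL, token, pipeline file name, and parameter names below are placeholders; only the client calls themselves come from the SDK:

```python
"""Sketch: submit a Data Science Pipelines run with parameter injection."""


def inject_params(defaults: dict, overrides: dict) -> dict:
    """Merge user-supplied overrides into a pipeline's default parameters.

    Unknown override keys raise immediately, so a typo fails fast instead
    of being silently ignored by the pipeline server.
    """
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise KeyError(f"unknown pipeline parameters: {sorted(unknown)}")
    return {**defaults, **overrides}


def submit_run(host: str, token: str, package: str, overrides: dict):
    """Submit a run to a live DSPA route (requires the `kfp` package)."""
    import kfp  # imported lazily so inject_params works without kfp installed

    # On OpenShift, the token typically comes from `oc whoami -t`.
    client = kfp.Client(host=host, existing_token=token)
    params = inject_params(
        defaults={"learning_rate": 0.01, "epochs": 10},  # placeholder params
        overrides=overrides,
    )
    # `package` is a compiled pipeline definition, e.g. "train-pipeline.yaml".
    return client.create_run_from_pipeline_package(package, arguments=params)
```

The DSPA route host is usually of the form `https://ds-pipeline-dspa-<project>.apps.<cluster-domain>`, though the exact name depends on how the pipeline server was provisioned.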
Use Cases
1. Provisioning pipeline infrastructure for new Data Science Projects
2. Automating recurring machine learning model training workflows
3. Debugging complex pipeline failures by inspecting step-level logs
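Recurring schedules can be sketched the same way. Kubeflow Pipelines schedules commonly use a 6-field cron expression (seconds first), but that field count is an assumption to check against your pipeline server; the experiment ID, job name, and file name below are placeholders:

```python
"""Sketch: sanity-check a cron expression and create a recurring run."""


def check_cron_fields(expr: str, expected_fields: int = 6) -> list[str]:
    """Split a cron expression and verify its field count before submission.

    Rejecting a malformed expression client-side gives a clearer error than
    whatever the pipeline server would return.
    """
    fields = expr.split()
    if len(fields) != expected_fields:
        raise ValueError(
            f"expected {expected_fields} cron fields, got {len(fields)}: {expr!r}"
        )
    return fields


def schedule_nightly_training(host: str, token: str, experiment_id: str) -> None:
    """Create a recurring run via the KFP SDK (requires `kfp` and a live DSPA)."""
    import kfp  # lazy import: only needed when actually talking to a server

    client = kfp.Client(host=host, existing_token=token)
    cron = "0 0 2 * * *"  # every day at 02:00, in the assumed 6-field form
    check_cron_fields(cron)
    client.create_recurring_run(
        experiment_id=experiment_id,
        job_name="nightly-training",          # placeholder schedule name
        cron_expression=cron,
        pipeline_package_path="train-pipeline.yaml",  # placeholder package
    )
```

For the debugging use case, the same `kfp.Client` exposes run details for step-level status, while container logs of a failing step are typically fetched from the underlying pods (for example with `oc logs` in the pipeline's namespace).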