RAG-powered autonomous anomaly investigation using local LLMs (vLLM, LangChain) and vector search (Qdrant)
Automated model retraining (Airflow) and lifecycle management (MLflow) with continuous deployment of 'latest' models
Immediate, detailed anomaly investigation reports delivered to operators via Slack for human-in-the-loop response
Dual-Pipeline Feature Store (Feast) for consistent feature serving and historical point-in-time joins
Real-time streaming inference with sub-second latency via Redpanda and QuixStreams
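The RAG investigation feature above hinges on retrieving the most similar past incidents for the LLM to reason over. As a minimal illustrative sketch of that retrieval step, here is plain-Python cosine-similarity top-k search standing in for Qdrant; the function names, document IDs, and toy 3-dimensional embeddings are all hypothetical, not part of this project's API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, corpus, k=2):
    # corpus: list of (doc_id, embedding) pairs.
    # Returns the ids of the k documents most similar to the query,
    # i.e. what a vector database like Qdrant does at scale with ANN indexes.
    scored = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy "embeddings" of past incident reports (hypothetical data).
corpus = [
    ("disk-latency-spike", [0.9, 0.1, 0.0]),
    ("oom-kill-loop",      [0.1, 0.9, 0.2]),
    ("network-partition",  [0.0, 0.2, 0.9]),
]

# An anomaly embedding close to the disk-latency incident retrieves it first.
print(top_k([0.85, 0.15, 0.05], corpus, k=2))
```

In the real pipeline, the retrieved incident texts would be injected into the LLM prompt as context before the investigation report is generated and sent to Slack.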