- Deploys multiple LLM inference engines (Ollama, llama.cpp, vLLM)
- Integrates KServe for Kubernetes-native ML serving with custom CRDs
- Provides a pre-configured AI development workspace with NVIDIA GPU support
- Manages AI workloads on Kubernetes using GitOps with Flux CD
- Includes web interfaces for AI interaction (Open WebUI, ComfyUI)
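
To illustrate the KServe-based serving mentioned above, a minimal `InferenceService` manifest for a vLLM-backed model might look roughly like the sketch below. This is an assumption-laden example, not taken from this repository: the name, namespace, runtime name, and `storageUri` are all placeholders, and it presumes a vLLM `ServingRuntime` is already installed in the cluster.

```yaml
# Hypothetical sketch of a KServe InferenceService using a vLLM runtime.
# All names and the storageUri are illustrative placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llama-vllm          # placeholder service name
  namespace: ai-serving     # placeholder namespace
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      runtime: kserve-vllmserver     # assumes a vLLM ServingRuntime exists
      storageUri: pvc://models/llama # placeholder model location
      resources:
        limits:
          nvidia.com/gpu: "1"        # request one NVIDIA GPU
```

In a GitOps setup like the one described, a manifest of this shape would typically live in the Git repository watched by Flux CD, which reconciles it into the cluster rather than it being applied by hand.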