Contextual guidance for deploying training to Hugging Face, RunPod, or local hardware.
Multi-mode configuration, including 'Sensible Defaults' and a 'Guide Me' wizard.
Optimized 4-bit and 8-bit quantization setups for significant VRAM reduction.
Automated generation of SFT, DPO, and GRPO training scripts and Jupyter notebooks.
Support for leading model families, including Llama, Qwen, Mistral, and DeepSeek.
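To make the 4-bit quantization feature concrete, here is a hedged sketch of the kind of quantized model load such a generated script typically performs, using Hugging Face transformers with bitsandbytes. The model id and specific flag values are illustrative assumptions, not necessarily what this tool emits.

```python
# Sketch of a 4-bit (NF4) quantized load via transformers + bitsandbytes.
# Assumption: the generated scripts use the standard BitsAndBytesConfig API.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
    bnb_4bit_use_double_quant=True,         # also quantize the quant constants
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",  # placeholder id; any supported causal LM
    quantization_config=bnb_config,
    device_map="auto",          # place layers across available devices
)
```

Swapping `load_in_4bit` for `load_in_8bit=True` (and dropping the 4-bit-specific flags) gives the corresponding 8-bit setup; 4-bit roughly quarters weight memory relative to fp16, while 8-bit halves it.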