Overview
The funsloth-train skill streamlines Large Language Model (LLM) fine-tuning by generating tailored Unsloth notebooks and scripts. It simplifies complex configurations for SFT, DPO, and GRPO training through three interaction modes: production-ready defaults, guided setup for beginners, and interactive widget-based notebooks. By automating the selection of parameters such as LoRA rank, quantization, and learning rate across popular model families like Llama 3.1 and Mistral, it lets developers move from raw data to a trained model faster while keeping VRAM usage low.
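To illustrate the kind of parameter selection the skill automates, here is a minimal, hypothetical sketch. The function name, VRAM thresholds, and default values below are illustrative assumptions, not the skill's actual logic or Unsloth's documented recommendations.

```python
# Hypothetical sketch of automated default selection, in the spirit of
# funsloth-train. All thresholds and values are illustrative assumptions.

def pick_defaults(vram_gb: float, method: str = "sft") -> dict:
    """Return a plausible training config for the available VRAM and method."""
    if method not in {"sft", "dpo", "grpo"}:
        raise ValueError(f"unknown method: {method}")

    # On smaller GPUs, 4-bit quantization and a lower LoRA rank keep the
    # fine-tune within memory; larger cards can afford a higher rank.
    load_in_4bit = vram_gb < 24
    lora_rank = 16 if vram_gb < 24 else 32

    # Preference-based methods (DPO/GRPO) are commonly run with a much
    # lower learning rate than plain SFT.
    learning_rate = 2e-4 if method == "sft" else 5e-6

    return {
        "load_in_4bit": load_in_4bit,
        "lora_rank": lora_rank,
        "lora_alpha": lora_rank,  # a common alpha == rank convention
        "learning_rate": learning_rate,
    }

# Example: defaults for an 8 GB card running SFT
cfg = pick_defaults(8.0)
```

A real generator would also account for model size, sequence length, and dataset format, but the shape of the decision is the same: map hardware and training method to a safe, memory-efficient starting configuration.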