Manages local Large Language Models and LM Studio server operations through the lms command-line interface.
The lms-command-guide skill provides comprehensive support for managing local LLM environments through the LM Studio CLI. It lets developers list, load, and unload models, control the local OpenAI-compatible API server, and download specific model quantizations directly from the terminal. With real-time log monitoring and runtime management, the skill streamlines local development of AI-powered applications, keeping resource usage efficient and debugging fast without a graphical interface.
Main features
1. Model lifecycle management (load, unload, list, and chat)
2. Hardware-specific runtime configuration and engine updates
3. Real-time request/response log streaming for API debugging
4. Local API server control (start, stop, and status monitoring)
5. Automated model fetching and importing from HuggingFace Hub
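The features above map onto `lms` subcommands. A representative terminal session might look like the following sketch (the model identifier is illustrative; flags and availability depend on your LM Studio version):

```shell
# List downloaded models, then check which ones are currently loaded
lms ls
lms ps

# Load a model (identifier is illustrative) and start the
# local OpenAI-compatible API server
lms load llama-3.2-1b-instruct
lms server start

# Stream request/response logs in another terminal while debugging API calls
lms log stream

# Shut the server down and free the model from memory
lms server stop
lms unload --all
```

`lms server status` reports whether the server is running, which is useful as a guard in provisioning scripts.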
Use cases
1. Automating local LLM server deployment for backend development
2. Scripting model downloads and server initialization in development environments
3. Comparing performance across different quantized model versions via CLI
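For the scripting use cases, the CLI can be driven from any language via a subprocess call. A minimal Python sketch, assuming `lms` is on PATH (the helper names are hypothetical, and each function degrades gracefully when the CLI is absent):

```python
import shutil
import subprocess


def lms_available() -> bool:
    """Return True if the lms CLI is found on PATH."""
    return shutil.which("lms") is not None


def list_models() -> list[str]:
    """Return the output lines of `lms ls`, or [] if lms is not installed."""
    if not lms_available():
        return []
    result = subprocess.run(
        ["lms", "ls"], capture_output=True, text=True, check=True
    )
    return result.stdout.splitlines()


def start_server() -> bool:
    """Start the local API server; return False if lms is not installed."""
    if not lms_available():
        return False
    subprocess.run(["lms", "server", "start"], check=True)
    return True
```

A CI or dev-setup script can call `start_server()` before running integration tests against the local OpenAI-compatible endpoint, and skip those tests when it returns False.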