Executes system commands based on client queries using an LLM for command generation.
This project implements a client-server system: a client sends a natural-language query to the server, the server passes the query to an Ollama LLM to generate an appropriate system command, executes that command with subprocess, and returns the output to the client. The system is built with FastAPI, FastMCP, aiohttp, and the Python standard library.
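The execution step can be sketched as follows. This is a minimal illustration rather than the project's actual code; the function name `run_generated_command` and the timeout value are assumptions. Since the command string comes from an LLM, it should be treated as untrusted input, so the sketch splits it with `shlex` instead of using `shell=True` and enforces a timeout.

```python
import shlex
import subprocess

def run_generated_command(command: str, timeout: int = 10) -> str:
    """Execute an LLM-generated command and return its output.

    The command string is tokenized with shlex rather than handed to a
    shell, which avoids shell injection via metacharacters in LLM output.
    """
    args = shlex.split(command)
    result = subprocess.run(
        args,
        capture_output=True,
        text=True,
        timeout=timeout,  # guard against commands that hang
    )
    # Return stdout on success, stderr otherwise, so the client sees something useful
    return result.stdout if result.returncode == 0 else result.stderr

print(run_generated_command("echo hello"))
```

In a real deployment an allow-list of permitted commands would be a sensible extra layer on top of this.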
Key Features
- Built with FastAPI and FastMCP.
- Processes client queries using an LLM.
- Uses aiohttp for asynchronous communication.
- Generates system commands from LLM output.
- Executes system commands using subprocess.
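Generating a command from a query typically means wrapping the query in a prompt for the Ollama model and cleaning up the reply before execution. A hedged sketch of that step; the prompt wording and the helper names `build_prompt` and `extract_command` are illustrative, not the project's actual API.

```python
import re

def build_prompt(query: str) -> str:
    """Wrap a natural-language query in a prompt that asks the LLM for a
    single shell command and nothing else."""
    return (
        "You are a command generator. Reply with exactly one shell command "
        "that answers the request, with no explanation and no code fences.\n"
        f"Request: {query}"
    )

def extract_command(llm_reply: str) -> str:
    """Strip code fences and surrounding whitespace that models often add,
    then keep only the first line of the reply."""
    cleaned = re.sub(r"^```[a-z]*\n?|```$", "", llm_reply.strip(), flags=re.MULTILINE)
    return cleaned.strip().splitlines()[0]

print(extract_command("```bash\nls -la /tmp\n```"))  # → ls -la /tmp
```

Cleanup like this matters because models frequently wrap commands in Markdown fences even when told not to, and passing the fence characters to subprocess would fail.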
Use Cases
- Automated system administration tasks.
- Remote command execution based on natural-language queries.
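Both use cases follow the same flow: query in, command generated, command executed, output returned. A self-contained end-to-end sketch with the LLM stubbed out (`fake_llm` is a stand-in; the real server would call the Ollama API at this point):

```python
import shlex
import subprocess

def fake_llm(query: str) -> str:
    # Stand-in for the Ollama call: maps a query to a command for demo purposes.
    return "uname" if "operating system" in query else "echo hello"

def handle_query(query: str) -> str:
    """Server-side flow: generate a command for the query, run it,
    and return its output to the caller."""
    command = fake_llm(query)
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout

print(handle_query("greet me"))  # → hello
```

The real system replaces `fake_llm` with a network call and serves `handle_query` behind a FastAPI/FastMCP endpoint, but the data flow is the same.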