Executes system commands based on client queries, using an LLM for command generation.
This project implements a client-server system: a client sends a natural-language query to the server, which uses an Ollama LLM to generate an appropriate system command, executes that command with `subprocess`, and returns the output to the client. The system is built with FastAPI, FastMCP, aiohttp, and the Python standard library.
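The server-side flow (query → LLM-generated command → execution → output) can be sketched as below. This is a minimal illustration, not the project's actual implementation: the function names `generate_command` and `execute_command` are hypothetical, the model name `llama3` is an assumption, and the standard-library `urllib` is used here for brevity where the project uses aiohttp. The endpoint shown is Ollama's default local `/api/generate` endpoint. Executing LLM-generated shell commands is inherently risky, so a real server would validate or sandbox them first.

```python
import json
import subprocess
import urllib.request

# Default Ollama HTTP endpoint; adjust host/port for your deployment.
OLLAMA_URL = "http://localhost:11434/api/generate"


def generate_command(query: str, model: str = "llama3") -> str:
    """Ask the LLM to translate a natural-language query into one shell command."""
    payload = json.dumps({
        "model": model,
        "prompt": f"Reply with only a single shell command that does: {query}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama's non-streaming response carries the text in the "response" field.
        return json.load(resp)["response"].strip()


def execute_command(command: str, timeout: int = 10) -> str:
    """Run the generated command and return its stdout (or stderr on failure)."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout if result.returncode == 0 else result.stderr
```

A request handler would then chain the two steps, e.g. `execute_command(generate_command(query))`, and return the result in the HTTP response.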