Provides essential infrastructure for autonomous agents, including intelligent LLM recommendations, real-time availability monitoring, and memory compression.
GetKin is an agent infrastructure server that equips autonomous agents with six tools. Agents can select the optimal LLM for a task from 17 models across 6 providers, monitor real-time API availability and latency, and route chat completions through an OpenAI-compatible proxy. GetKin also compresses agent session dumps into a structured three-tier memory system, lists available models with current pricing, and reports memory usage.
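Because the proxy is OpenAI-compatible, a request to it follows the OpenAI Chat Completions schema. The sketch below builds such a payload; the endpoint URL is an assumption (GetKin's actual address and port are not documented in this listing), so the network call itself is left as a comment.

```python
import json

# Hypothetical GetKin proxy endpoint -- an assumption, not documented here.
GETKIN_PROXY_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are an autonomous agent."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,
    }

payload = build_chat_request("gpt-4o-mini", "Summarize today's task queue.")
body = json.dumps(payload).encode("utf-8")

# To actually route the call through the proxy, POST `body` with a JSON
# content type, e.g. with urllib.request (omitted since the URL above is
# only an assumption):
#   req = urllib.request.Request(GETKIN_PROXY_URL, data=body,
#       headers={"Content-Type": "application/json"})
#   resp = urllib.request.urlopen(req)
```

Any OpenAI-style client can target the same route by pointing its base URL at the proxy instead of the upstream provider.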
Key Features
- Comprehensive listing of available LLM models with current pricing
- OpenAI-compatible proxy for routing LLM chat completion calls
- Structured three-tier memory compression for agent session dumps
- Optimal LLM recommendation across 17 models and 6 providers
- Real-time monitoring of LLM provider API availability and latency
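The recommendation feature amounts to minimizing cost or latency over a model catalog. A minimal sketch of that selection logic, using an illustrative catalog (the model names, prices, and latencies below are made up, not GetKin's actual data):

```python
# Illustrative model catalog -- entries are hypothetical, not GetKin's data.
CATALOG = [
    {"name": "cheap-batch",  "provider": "a", "cost_per_1k_tokens": 0.0002, "median_latency_ms": 1500},
    {"name": "fast-small",   "provider": "b", "cost_per_1k_tokens": 0.0010, "median_latency_ms": 300},
    {"name": "strong-large", "provider": "c", "cost_per_1k_tokens": 0.0300, "median_latency_ms": 2000},
]

def recommend(catalog: list[dict], optimize_for: str = "cost") -> dict:
    """Return the catalog entry that minimizes the chosen objective."""
    key = {
        "cost":  lambda m: m["cost_per_1k_tokens"],
        "speed": lambda m: m["median_latency_ms"],
    }[optimize_for]
    return min(catalog, key=key)

print(recommend(CATALOG, "cost")["name"])   # cheap-batch
print(recommend(CATALOG, "speed")["name"])  # fast-small
```

A fuller version would weigh both objectives and filter out providers that the availability monitor currently reports as down.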
Use Cases
- Optimizing LLM selection for cost or speed in autonomous agent tasks
- Ensuring reliable LLM API access through real-time availability checks
- Efficiently managing and structuring an agent's conversational memory
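The memory use case can be sketched as a three-tier split of a session dump. The listing says only that dumps are compressed into a structured three-tier system, so the tier semantics below are an assumption following a common scheme: recent turns kept verbatim, older turns summarized, and tagged facts promoted to long-term storage.

```python
def compress_session(messages: list[dict], short_window: int = 4) -> dict:
    """Split a session dump into a three-tier memory structure.

    Tier semantics are assumed (not specified by GetKin):
      short_term -- the most recent turns, kept verbatim
      mid_term   -- naive one-line summaries of older turns
      long_term  -- older turns the agent tagged as durable facts
    """
    short_term = messages[-short_window:]
    older = messages[:-short_window]
    # Naive "summary": keep only the first sentence of each older message.
    mid_term = [m["content"].split(".")[0] for m in older]
    long_term = [m["content"] for m in older if m.get("tag") == "fact"]
    return {"short_term": short_term, "mid_term": mid_term, "long_term": long_term}

session = [
    {"content": "User asked about pricing. Details followed.", "tag": "fact"},
    {"content": "Agent fetched rates."},
    {"content": "Hello"},
    {"content": "Hi"},
    {"content": "What is the task?"},
    {"content": "Done"},
]
memory = compress_session(session, short_window=4)
```

In a real deployment the summarization and fact-extraction steps would themselves be LLM calls, which is where the proxy and model-recommendation tools come back into play.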