- Built-in retry and fallback logic for resilient API integration
- Native support for local llamafile and Ollama server routing
- Automatic usage monitoring and cost tracking across all providers
- Standardized exception mapping to OpenAI-style error types
- Unified completion() function for 100+ cloud and local LLM providers
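The retry-and-fallback behavior listed above can be sketched with plain Python. This is a minimal illustration of the pattern, not the library's actual API: the `call_with_fallbacks` function, the provider callables, and the use of `TimeoutError` as a stand-in for transient API errors are all hypothetical.

```python
def call_with_fallbacks(providers, prompt, max_retries=2):
    """Try each provider in order; retry transient failures a few times
    before falling back to the next provider in the mapping.

    `providers` maps a provider name to a callable taking the prompt.
    (Hypothetical sketch -- real clients would add backoff and distinguish
    retryable from fatal errors.)
    """
    last_err = None
    for name, call in providers.items():
        for _attempt in range(max_retries + 1):
            try:
                return name, call(prompt)
            except TimeoutError as err:  # stand-in for a transient API error
                last_err = err
    raise RuntimeError("all providers failed") from last_err


# Usage with fake providers: the primary always times out, so after
# max_retries + 1 attempts the call falls back to the backup.
calls = {"primary": 0}

def flaky(prompt):
    calls["primary"] += 1
    raise TimeoutError("provider unavailable")

def stable(prompt):
    return f"echo: {prompt}"

name, out = call_with_fallbacks({"primary": flaky, "backup": stable}, "hi")
```

A dict of callables keeps the sketch self-contained; a real router would also map each provider's native exceptions onto a shared error hierarchy so this retry loop can treat them uniformly.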