Builds a fully serverless LLM agent that dynamically manages memory and context using AWS Bedrock, Lambda-hosted MCP servers, and S3 Vectors.
This project provides a complete serverless architecture for a Bedrock-powered LLM agent. It integrates MCP servers, hosted entirely on AWS Lambda, to handle memory and context management: the agent can recall relevant past memories or store new information to produce context-aware responses. Built on AWS Bedrock for language understanding and embeddings, Amazon S3 Vectors for persistent semantic memory, and orchestrated with the Python CDK, it offers a scalable, cost-effective way to deploy intelligent AI agents without managing dedicated infrastructure.
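The Lambda-hosted MCP server boils down to a handler that dispatches tool calls such as storing or recalling memories. A minimal sketch of that shape, with illustrative tool names (`store_memory`, `recall_memory`) and an in-memory list standing in for the S3 Vectors index:

```python
import json

# In-memory stand-in for the S3 Vectors index; the real project would
# query the vector store from inside the Lambda-hosted MCP server.
_MEMORY = []

def handler(event, context=None):
    """Dispatch an MCP-style tool call. Tool names here are illustrative."""
    tool = event.get("tool")
    args = event.get("arguments", {})
    if tool == "store_memory":
        _MEMORY.append(args["text"])
        return {"statusCode": 200, "body": json.dumps({"stored": len(_MEMORY)})}
    if tool == "recall_memory":
        query = args["query"].lower()
        hits = [m for m in _MEMORY if query in m.lower()]
        return {"statusCode": 200, "body": json.dumps({"memories": hits})}
    return {"statusCode": 400, "body": json.dumps({"error": f"unknown tool: {tool}"})}
```

In the actual stack, API Gateway invokes this handler and the recall path would rank vectors by semantic similarity rather than substring match.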
Key Features
1. Full infrastructure orchestration via the Python CDK
2. Serverless LLM agent powered by AWS Bedrock
3. Lambda-hosted MCP servers for dynamic memory and context management
4. Persistent semantic memory using Amazon S3 Vectors
5. Secure API Gateway endpoints with API key authentication
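Persistent semantic memory works by embedding text with a Bedrock embedding model and ranking stored vectors by similarity at query time. A self-contained sketch of that ranking step, using hard-coded toy vectors in place of real Bedrock embeddings and the S3 Vectors query API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall(query_vec, memories, top_k=2):
    """Rank stored (text, vector) pairs by similarity to the query vector."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# Toy 3-d embeddings; real ones come from a Bedrock embedding model.
memories = [
    ("likes hiking", [0.9, 0.1, 0.0]),
    ("works in finance", [0.0, 0.2, 0.9]),
    ("owns a dog", [0.8, 0.3, 0.1]),
]
print(recall([1.0, 0.0, 0.0], memories))  # → ['likes hiking', 'owns a dog']
```

In production this ranking is performed by the vector index itself; the sketch only shows the similarity logic the agent relies on when recalling memories.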
Use Cases
1. Deploying intelligent, context-aware AI agents on AWS
2. Managing long-term memory and conversational context for LLM applications
3. Building scalable serverless AI solutions with integrated memory features