About
This skill provides a comprehensive framework for implementing high-performance vector search within LLM applications. It offers guidance on selecting the right embedding model (such as OpenAI, Voyage, or local BGE models), implementing chunking strategies such as recursive character splitting or semantic splitting, and optimizing embedding quality for domain-specific data. Whether you are building a production-grade RAG pipeline or a specialized code search tool, this skill helps improve retrieval accuracy while managing cost and latency.
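To make the recursive-character-splitting idea concrete, here is a minimal, dependency-free sketch of the technique: try coarse separators (paragraphs) first, and fall back to progressively finer ones (lines, sentences, words) whenever a piece still exceeds the chunk size. The function name, parameters, and separator list are illustrative assumptions, not part of any specific library's API.

```python
def recursive_split(text, chunk_size=200, separators=("\n\n", "\n", ". ", " ")):
    """Split text into chunks of at most chunk_size characters,
    preferring coarse boundaries and recursing with finer separators
    only when a piece is still too large. Illustrative sketch."""
    if len(text) <= chunk_size:
        return [text] if text.strip() else []
    for i, sep in enumerate(separators):
        if sep in text:
            pieces = text.split(sep)
            chunks, current = [], ""
            for piece in pieces:
                candidate = current + sep + piece if current else piece
                if len(candidate) <= chunk_size:
                    # Greedily merge small pieces back together.
                    current = candidate
                else:
                    if current:
                        chunks.append(current)
                    if len(piece) > chunk_size:
                        # Piece itself is too large: recurse with finer separators.
                        chunks.extend(recursive_split(piece, chunk_size, separators[i + 1:]))
                        current = ""
                    else:
                        current = piece
            if current:
                chunks.append(current)
            return chunks
    # No usable separator left: hard-cut by character count.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

Production splitters (e.g. in LangChain or LlamaIndex) add overlap between chunks and token-based length functions, but the core recursion is the same.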