About
This skill provides a comprehensive framework for managing the embedding lifecycle in LLM applications. It guides developers through selecting the right model—ranging from high-accuracy proprietary APIs like OpenAI and Voyage to lightweight, open-source local options like BGE. It includes production-ready templates for advanced chunking techniques such as recursive character splitting and semantic sectioning, ensuring context is preserved for better retrieval. Whether you are building a document search engine or fine-tuning for specialized domains like legal or code, this skill provides the implementation patterns needed for accurate, cost-effective, and scalable vector search.
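To make the chunking-plus-embedding flow concrete, below is a minimal sketch that pairs a hand-rolled recursive character splitter with a local BGE model from sentence-transformers. The separator hierarchy, the 500-character chunk size, the model name "BAAI/bge-small-en-v1.5", and the input path are illustrative assumptions, not fixed choices made by this skill.

```python
# Sketch: recursive character splitting followed by local embedding with a
# lightweight open-source BGE model. Chunk size, separators, model name, and
# the input file path are assumptions chosen for illustration.
from sentence_transformers import SentenceTransformer


def recursive_split(text: str, chunk_size: int = 500,
                    separators: tuple = ("\n\n", "\n", ". ", " ")) -> list[str]:
    """Split on the coarsest separator first; recurse with finer separators
    on any piece that is still longer than chunk_size."""
    if len(text) <= chunk_size or not separators:
        # Base case: short enough, or no finer separators left (sketch keeps
        # the oversized piece rather than hard-slicing it).
        return [text]
    sep, rest = separators[0], separators[1:]
    chunks: list[str] = []
    current = ""
    for piece in text.split(sep):
        candidate = f"{current}{sep}{piece}" if current else piece
        if len(candidate) <= chunk_size:
            current = candidate            # keep accumulating into this chunk
        else:
            if current:
                chunks.append(current)     # flush the chunk built so far
                current = ""
            if len(piece) <= chunk_size:
                current = piece            # piece fits on its own; start fresh
            else:
                # Piece alone exceeds chunk_size; recurse with finer separators.
                chunks.extend(recursive_split(piece, chunk_size, rest))
    if current:
        chunks.append(current)
    return chunks


if __name__ == "__main__":
    # Hypothetical input document.
    document = open("docs/handbook.txt", encoding="utf-8").read()
    chunks = recursive_split(document)

    # Lightweight local option; normalized vectors suit cosine-similarity search.
    model = SentenceTransformer("BAAI/bge-small-en-v1.5")
    vectors = model.encode(chunks, normalize_embeddings=True)
    print(f"{len(chunks)} chunks -> embeddings of dimension {vectors.shape[1]}")
```

The same splitting logic works unchanged with a proprietary embedding API in place of the local model; only the encode step differs, which is what makes the model-selection and chunking concerns separable in practice.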