Sentence Transformers
Created by UKPLab
Compute text embeddings for semantic search, textual similarity, and paraphrase mining.
About
Sentence Transformers provides an easy way to access, use, and train state-of-the-art embedding and reranker models. It lets users generate embeddings with Sentence Transformer models or compute similarity scores with Cross-Encoder models, unlocking applications such as semantic search, semantic textual similarity, and paraphrase mining. With a wide selection of pre-trained models and support for training custom ones, Sentence Transformers is a versatile toolkit for natural language understanding tasks.
Key Features
- Compute embeddings using Sentence Transformer models
- Supports various transformer networks, including BERT, RoBERTa, XLM-R, DistilBERT, Electra, and BART
- Access over 10,000 pre-trained models on Hugging Face
- Calculate similarity scores using Cross-Encoder (reranker) models
- Train or finetune custom embedding and reranker models
Use Cases
- Semantic Search
- Semantic Textual Similarity
- Paraphrase Mining