@@ -71,17 +71,19 @@ Below are some examples of the currently supported models:

| MTEB Rank | Model Size          | Model Type  | Model ID                                                                                             |
|-----------|---------------------|-------------|-------------------------------------------------------------------------------------------------------|
-| 1         | 7B (Very Expensive) | Mistral     | [Salesforce/SFR-Embedding-2_R](https://hf.co/Salesforce/SFR-Embedding-2_R)                           |
-| 2         | 7B (Very Expensive) | Qwen2       | [Alibaba-NLP/gte-Qwen2-7B-instruct](https://hf.co/Alibaba-NLP/gte-Qwen2-7B-instruct)                 |
-| 9         | 1.5B (Expensive)    | Qwen2       | [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://hf.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct)             |
-| 15        | 0.4B                | Alibaba GTE | [Alibaba-NLP/gte-large-en-v1.5](https://hf.co/Alibaba-NLP/gte-large-en-v1.5)                         |
+| 3         | 7B (Very Expensive) | Qwen2       | [Alibaba-NLP/gte-Qwen2-7B-instruct](https://hf.co/Alibaba-NLP/gte-Qwen2-7B-instruct)                 |
+| 11        | 1.5B (Expensive)    | Qwen2       | [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://hf.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct)             |
+| 14        | 7B (Very Expensive) | Mistral     | [Salesforce/SFR-Embedding-2_R](https://hf.co/Salesforce/SFR-Embedding-2_R)                           |
| 20        | 0.3B                | Bert        | [WhereIsAI/UAE-Large-V1](https://hf.co/WhereIsAI/UAE-Large-V1)                                       |
-| 24        | 0.5B                | XLM-RoBERTa | [intfloat/multilingual-e5-large-instruct](https://hf.co/intfloat/multilingual-e5-large-instruct)     |
+| 31        | 0.5B                | XLM-RoBERTa | [Snowflake/snowflake-arctic-embed-l-v2.0](https://hf.co/Snowflake/snowflake-arctic-embed-l-v2.0)     |
+| 37        | 0.3B                | Alibaba GTE | [Snowflake/snowflake-arctic-embed-m-v2.0](https://hf.co/Snowflake/snowflake-arctic-embed-m-v2.0)     |
+| 49        | 0.5B                | XLM-RoBERTa | [intfloat/multilingual-e5-large-instruct](https://hf.co/intfloat/multilingual-e5-large-instruct)     |
+| N/A       | 0.4B                | Alibaba GTE | [Alibaba-NLP/gte-large-en-v1.5](https://hf.co/Alibaba-NLP/gte-large-en-v1.5)                         |
| N/A       | 0.1B                | NomicBert   | [nomic-ai/nomic-embed-text-v1](https://hf.co/nomic-ai/nomic-embed-text-v1)                           |
| N/A       | 0.1B                | NomicBert   | [nomic-ai/nomic-embed-text-v1.5](https://hf.co/nomic-ai/nomic-embed-text-v1.5)                       |
| N/A       | 0.1B                | JinaBERT    | [jinaai/jina-embeddings-v2-base-en](https://hf.co/jinaai/jina-embeddings-v2-base-en)                 |
| N/A       | 0.1B                | JinaBERT    | [jinaai/jina-embeddings-v2-base-code](https://hf.co/jinaai/jina-embeddings-v2-base-code)             |
-| N/A       | 0.1B                | MPNet       | [sentence-transformers/all-mpnet-base-v2](https://hf.co/sentence-transformers/all-mpnet-base-v2)     |
+| N/A       | 0.1B                | MPNet       | [sentence-transformers/all-mpnet-base-v2](https://hf.co/sentence-transformers/all-mpnet-base-v2)     |

To explore the list of best performing text embeddings models, visit the
[Massive Text Embedding Benchmark (MTEB) Leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
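
As a usage note, the sketch below shows how a client might request embeddings once a server has been launched with one of the model IDs listed in the table above. The host, port, and the `/embed` route with an `{"inputs": ...}` payload are assumptions modeled on the text-embeddings-inference HTTP API, not details taken from this diff; adjust them to match your deployment.

```python
# Minimal client sketch. Assumptions: a server started with one of the model IDs
# above (e.g. sentence-transformers/all-mpnet-base-v2), listening locally on
# port 8080 and exposing an /embed route that accepts {"inputs": ...}, as
# text-embeddings-inference does. Adjust the URL and payload to your deployment.
import requests

response = requests.post(
    "http://127.0.0.1:8080/embed",
    json={"inputs": ["What is deep learning?", "Deep learning is a subset of machine learning."]},
    timeout=30,
)
response.raise_for_status()

# The server is expected to return one embedding vector per input string.
embeddings = response.json()
print(f"{len(embeddings)} embeddings of dimension {len(embeddings[0])}")
```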