This is a vocabulary-pruned version of intfloat/multilingual-e5-base that keeps only Russian and English tokens. The table below compares the two models; a sketch of the pruning idea follows it.
| | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
|---|---|---|
| Model size (MB) | 1060.65 | 504.89 |
| Params (count) | 278,043,648 | 132,354,048 |
| Word embedding params (count) | 192,001,536 | 46,311,936 |
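As a rough illustration of what vocabulary pruning involves (this is not the actual script used to build d0rj/e5-base-en-ru, and the tiny corpus below is hypothetical), one can collect the token ids seen in English and Russian text and slice the input-embedding matrix down to that subset:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')

# Hypothetical corpus; in practice this would be large English and Russian collections.
corpus = ['query: how to shrink a vocabulary', 'query: как уменьшить словарь модели']
kept_ids = sorted(
    {tid for text in corpus for tid in tokenizer(text)['input_ids']}
    | set(tokenizer.all_special_ids)
)

old_embeddings = model.get_input_embeddings().weight.data  # [250002, 768]
new_embeddings = torch.nn.Embedding(len(kept_ids), old_embeddings.size(1))
new_embeddings.weight.data = old_embeddings[kept_ids].clone()  # copy only the kept rows
model.set_input_embeddings(new_embeddings)

# The tokenizer's vocabulary must also be rebuilt so that old ids map to the
# new, contiguous ids; that step is omitted here.
```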
Performance on the SberQuAD dev benchmark:
| Metric on SberQuAD (4122 questions) | intfloat/multilingual-e5-base | d0rj/e5-base-en-ru |
|---|---|---|
| recall@3 | | |
| map@3 | | |
| mrr@3 | | |
| recall@5 | | |
| map@5 | | |
| mrr@5 | | |
| recall@10 | | |
| map@10 | | |
| mrr@10 | | |
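As a reminder of what the cutoff metrics above measure, here is a minimal, generic sketch of recall@k and mrr@k over one ranked result list (not the evaluation code used for this table; the values are computed per question and averaged over the benchmark, and map@k additionally averages precision at each relevant hit):

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the relevant passages that appear in the top-k results."""
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)


def mrr_at_k(ranked_ids, relevant_ids, k):
    """Reciprocal rank of the first relevant passage within the top-k, else 0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0


# Hypothetical ranking for one question whose single relevant passage is 'p7'.
print(recall_at_k(['p3', 'p7', 'p1'], ['p7'], k=3))  # 1.0
print(mrr_at_k(['p3', 'p7', 'p1'], ['p7'], k=3))     # 0.5
```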
Use dot-product distance for retrieval. Rules of thumb for the prefixes:

- Use "query: " and "passage: " respectively for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
- Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval (see the sketch after this list).
- Use the "query: " prefix if you want to use the embeddings as features, e.g. for linear-probing classification or clustering.
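A minimal sketch of the symmetric case, where both texts get the "query: " prefix (the sentence pair is made up; sentence-transformers usage itself is shown again further down):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('d0rj/e5-base-en-ru')

# Symmetric task: both sides of the similarity pair use the "query: " prefix.
embeddings = model.encode(
    [
        'query: the weather is nice today',
        'query: сегодня прекрасная погода',  # "the weather is lovely today"
    ],
    convert_to_tensor=True,
)
print(float(util.cos_sim(embeddings[0], embeddings[1])))
```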
Usage with transformers directly (asymmetric retrieval with "query: " and "passage: " prefixes):

```python
import torch.nn.functional as F
from torch import Tensor
from transformers import XLMRobertaTokenizer, XLMRobertaModel


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Mean-pool the token embeddings, ignoring padding positions.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


input_texts = [
    'query: How does a corporate website differ from a business card website?',
    'query: Где был создан первый троллейбус?',  # "Where was the first trolleybus created?"
    'passage: The first trolleybus was created in Germany by engineer Werner von Siemens, probably influenced by the idea of his brother, Dr. Wilhelm Siemens, who lived in England, expressed on May 18, 1881 at the twenty-second meeting of the Royal Scientific Society. The electrical circuit was carried out by an eight-wheeled cart (Kontaktwagen) rolling along two parallel contact wires. The wires were located quite close to each other, and in strong winds they often overlapped, which led to short circuits. An experimental trolleybus line with a length of 540 m (591 yards), opened by Siemens & Halske in the Berlin suburb of Halensee, operated from April 29 to June 13, 1882.',
    # Russian passage about corporate websites (the counterpart to the first query).
    'passage: Корпоративный сайт — содержит полную информацию о компании-владельце, услугах/продукции, событиях в жизни компании. Отличается от сайта-визитки и представительского сайта полнотой представленной информации, зачастую содержит различные функциональные инструменты для работы с контентом (поиск и фильтры, календари событий, фотогалереи, корпоративные блоги, форумы). Может быть интегрирован с внутренними информационными системами компании-владельца (КИС, CRM, бухгалтерскими системами). Может содержать закрытые разделы для тех или иных групп пользователей — сотрудников, дилеров, контрагентов и пр.',
]

tokenizer = XLMRobertaTokenizer.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)
model = XLMRobertaModel.from_pretrained('d0rj/e5-base-en-ru', use_cache=False)

# Tokenize, encode, mean-pool, then L2-normalize so that the dot product below
# is equivalent to cosine similarity.
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)

# Similarity of the two queries (rows) against the two passages (columns).
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[68.59542846679688, 81.75910949707031], [80.36100769042969, 64.77748107910156]]
```
The same model through the feature-extraction pipeline (this returns token-level features, not sentence embeddings):

```python
from transformers import pipeline

pipe = pipeline('feature-extraction', model='d0rj/e5-base-en-ru')
embeddings = pipe(input_texts, return_tensors=True)
embeddings[0].size()
# torch.Size([1, 17, 768])
```
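The pipeline output still has to be pooled to get one vector per text. A minimal follow-up sketch, assuming the `embeddings` list from the pipeline call above (each element is a [1, seq_len, hidden] tensor for a single, unpadded text, so a plain mean over the sequence dimension matches `average_pool`):

```python
import torch.nn.functional as F

# Mean-pool over the sequence dimension and L2-normalize each text's features.
sentence_embeddings = [F.normalize(t.mean(dim=1), p=2, dim=1) for t in embeddings]
sentence_embeddings[0].size()
# torch.Size([1, 768])
```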
Usage with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

sentences = [
    'query: Что такое круглые тензоры?',  # "What are round tensors?"
    'passage: Abstract: we introduce a novel method for compressing round tensors based on their inherent radial symmetry. We start by generalising PCA and eigen decomposition on round tensors...',
]

model = SentenceTransformer('d0rj/e5-base-en-ru')
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings.size()
# torch.Size([2, 768])
```
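Continuing the example above, a small sketch that scores the query against the passage with a dot product over L2-normalized embeddings (i.e. cosine similarity), consistent with the dot-product retrieval advice earlier:

```python
import torch.nn.functional as F

# Normalize (harmless if the embeddings are already unit-length), then take
# the dot product between the query and the passage embedding.
normalized = F.normalize(embeddings, p=2, dim=1)
print(float(normalized[0] @ normalized[1]))
```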