---
pipeline_tag: sentence-similarity
language:
- pl
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- ipipan/polqa
- ipipan/maupqa
---

# Silver Retriever Base (v1)

Silver Retriever encodes Polish sentences and paragraphs into a 768-dimensional dense vector space and can be used for tasks such as document retrieval or semantic search.

It was initialized from the [HerBERT-base](https://huggingface.co/allegro/herbert-base-cased) model and fine-tuned on the [PolQA](https://huggingface.co/datasets/ipipan/polqa) and [MAUPQA](https://huggingface.co/datasets/ipipan/maupqa) datasets for 15,000 steps with a batch size of 1,024.

## Preparing inputs

The model was trained on question-passage pairs and works best when the input uses the same format as during training:
- Prefix each question with the phrase `Pytanie:` (as in the sketch below).
- Training passages consisted of a `title` and `text` concatenated with the special token `</s>`. Even if your passages don't have a `title`, it is still beneficial to prefix the passage with the `</s>` token.
- Although we used the dot product during training, the model usually works better with the cosine distance.
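
For illustration, the formatting rules above could be wrapped in small helpers like these (a minimal sketch; `format_question` and `format_passage` are hypothetical names, not part of the model's API):

```python
def format_question(question: str) -> str:
    # Prepend the `Pytanie:` prefix used during training.
    return f"Pytanie: {question}"

def format_passage(text: str, title: str = "") -> str:
    # Passages are `title` + `</s>` + `text`; keep the `</s>` prefix
    # even when no title is available.
    return f"{title}</s>{text}"

print(format_question("W jakim mieście urodził się Zbigniew Herbert?"))
print(format_passage("Zbigniew Bolesław Ryszard Herbert ...", title="Zbigniew Herbert"))
```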

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = [
    "Pytanie: W jakim mieście urodził się Zbigniew Herbert?", 
    "Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
]

model = SentenceTransformer('ipipan/silver-retriever-base-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
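
To rank passages against a question, you can then compare the embeddings with cosine similarity, for example via `sentence_transformers.util.cos_sim` (a short sketch reusing the `embeddings` variable from the snippet above):

```python
from sentence_transformers import util

# Cosine similarity between the question (row 0) and the passage (row 1);
# higher scores mean more relevant passages.
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```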

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    # CLS pooling: take the hidden state of the first token ([CLS]) of each sequence
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = [
    "Pytanie: W jakim mieście urodził się Zbigniew Herbert?", 
    "Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ipipan/silver-retriever-base-v1')
model = AutoModel.from_pretrained('ipipan/silver-retriever-base-v1')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
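
Since cosine distance usually works better than the raw dot product here, one way to score the question against the passage is to L2-normalize the embeddings, so that the dot product equals cosine similarity (a minimal sketch in plain PyTorch, continuing from the snippet above):

```python
import torch.nn.functional as F

# L2-normalize so that the dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
score = normalized[0] @ normalized[1]
print(f"Cosine similarity: {score.item():.4f}")
```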

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Additional Information

### Model Creators

The model was created by Piotr Rybak from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).

This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]