---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: hi
widget:
- source_sentence: "एक आदमी एक रस्सी पर चढ़ रहा है"
  sentences:
    - "एक आदमी एक रस्सी पर चढ़ता है" 
    - "एक आदमी एक दीवार पर चढ़ रहा है"
    - "एक आदमी बांसुरी बजाता है"
  example_title: "Example 1"

- source_sentence: "कुछ लोग गा रहे हैं"
  sentences:
    - "लोगों का एक समूह गाता है"
    - "बिल्ली दूध पी रही है"
    - "दो आदमी लड़ रहे हैं"
  example_title: "Example 2"

- source_sentence: "फेडरर ने 7वां विंबलडन खिताब जीत लिया है"
  sentences:
    - "फेडरर अपने करियर में कुल 20 ग्रैंडस्लैम खिताब जीत चुके है "
    - "फेडरर ने सितंबर में अपने निवृत्ति की घोषणा की"
    - "एक आदमी कुछ खाना पकाने का तेल एक बर्तन में डालता है"
  example_title: "Example 3"
---

# HindSBERT-STS

This is a HindSBERT model (l3cube-pune/hindi-sentence-bert-nli) fine-tuned on the STS dataset. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model, supporting major Indic languages and cross-lingual sentence similarity, is shared here: <a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'>indic-sentence-similarity-sbert</a> <br>

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11187).

```
@article{joshi2022l3cubemahasbert,
  title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
  author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2211.11187},
  year={2022}
}
```

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

# Hindi example sentences (taken from the widget examples above)
sentences = ["एक आदमी एक रस्सी पर चढ़ रहा है", "एक आदमी एक रस्सी पर चढ़ता है"]

# Replace '{MODEL_NAME}' with this model's Hugging Face Hub id
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
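
Since the model's intended task is sentence similarity, a natural next step is to score sentence pairs. Here is a minimal sketch using `util.cos_sim` from sentence-transformers, reusing the Hindi widget examples above; `'{MODEL_NAME}'` is again a placeholder for this model's Hub id:

```python
from sentence_transformers import SentenceTransformer, util

# Hindi sentences from the widget examples above, with rough English glosses
sentences = [
    "एक आदमी एक रस्सी पर चढ़ रहा है",  # "A man is climbing a rope"
    "एक आदमी एक रस्सी पर चढ़ता है",    # "A man climbs a rope"
    "एक आदमी बांसुरी बजाता है",        # "A man plays the flute"
]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)

# Cosine similarity between the first sentence and the other two;
# the near-paraphrase should score noticeably higher than the unrelated sentence.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)
```

The same embeddings can be fed to any clustering or nearest-neighbour search library for the clustering and semantic-search use cases mentioned above.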



## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Hindi sentences we want sentence embeddings for (taken from the widget examples above)
sentences = ["एक आदमी एक रस्सी पर चढ़ रहा है", "एक आदमी एक रस्सी पर चढ़ता है"]

# Load model from HuggingFace Hub; replace '{MODEL_NAME}' with this model's Hub id
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
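
To compare these embeddings, cosine similarity can be computed directly in PyTorch. A minimal sketch, assuming it runs right after the snippet above: L2-normalizing the embeddings makes their dot product equal to cosine similarity.

```python
import torch.nn.functional as F

# Continuing from the snippet above: L2-normalize the pooled embeddings;
# the dot product of two unit-length vectors is their cosine similarity.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T

print("Cosine similarity matrix:")
print(cosine_scores)
```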