AnanyaCoder committed
Commit 9015d84
Parent(s): 8fd03d3

Update README.md

Files changed (1): README.md (+27 -39)
README.md CHANGED
@@ -5,12 +5,17 @@ tags:
- feature-extraction
- sentence-similarity
- transformers
+ - MT Evaluation
+ - Metrics
+ - Evaluation

---

- # {MODEL_NAME}
+ # AnanyaCoder/XLsim_en-de

- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
+ XLsim: an MT evaluation metric based on a Siamese architecture
+
+ XLsim is a supervised reference-based metric that regresses on human judgment scores provided by WMT (2017-2022). Starting from the cross-lingual language model [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base), we train a supervised model using a Siamese network architecture with CosineSimilarityLoss.

<!--- Describe your model here -->
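The added description says the metric is trained as a Siamese network over XLM-RoBERTa-base with CosineSimilarityLoss. As a rough sketch of what such a setup looks like in sentence-transformers (this is not the authors' training script; the pairs, labels, and hyperparameters below are invented for illustration):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical (MT output, reference) pairs labeled with human scores scaled to [0, 1]
train_examples = [
    InputExample(texts=["The house is big.", "The house is large."], label=0.92),
    InputExample(texts=["Cat sat mat.", "The cat sat on the mat."], label=0.35),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Siamese setup: both sentences pass through the same XLM-R encoder (mean pooling
# is added automatically), and CosineSimilarityLoss regresses cos(u, v) onto the label
model = SentenceTransformer("xlm-roberta-base")
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```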
 
@@ -25,50 +30,29 @@ pip install -U sentence-transformers
Then you can use the model like this:

```python
- from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]
-
- model = SentenceTransformer('{MODEL_NAME}')
- embeddings = model.encode(sentences)
- print(embeddings)
- ```
-
- ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
-
- ```python
- from transformers import AutoTokenizer, AutoModel
- import torch
-
- #Mean Pooling - Take attention mask into account for correct averaging
- def mean_pooling(model_output, attention_mask):
-     token_embeddings = model_output[0] #First element of model_output contains all token embeddings
-     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
- # Sentences we want sentence embeddings for
- sentences = ['This is an example sentence', 'Each sentence is converted']
-
- # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
- model = AutoModel.from_pretrained('{MODEL_NAME}')
-
- # Tokenize sentences
- encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
- # Compute token embeddings
- with torch.no_grad():
-     model_output = model(**encoded_input)
-
- # Perform pooling. In this case, mean pooling.
- sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
- print("Sentence embeddings:")
- print(sentence_embeddings)
+ from sentence_transformers import SentenceTransformer, util
+
+ metric_model = SentenceTransformer('AnanyaCoder/XLsim_en-de')
+
+ # MT hypotheses and their references, aligned by index
+ mt_samples = ['This is a mt sentence1', 'This is a mt sentence2']
+ ref_samples = ['This is a ref sentence1', 'This is a ref sentence2']
+
+ # Compute embeddings for both lists
+ mtembeddings = metric_model.encode(mt_samples, convert_to_tensor=True)
+ refembeddings = metric_model.encode(ref_samples, convert_to_tensor=True)
+
+ # Compute cosine similarities; the diagonal holds the aligned MT-reference pairs
+ cosine_scores_refmt = util.cos_sim(mtembeddings, refembeddings)
+ # cosine_scores_srcmt = util.cos_sim(mtembeddings, srcembeddings)  # reference-free (QE) variant, given source embeddings
+
+ metric_model_scores = []
+ for i in range(len(mt_samples)):
+     metric_model_scores.append(cosine_scores_refmt[i][i].item())
+
+ scores = metric_model_scores
```
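The loop in the added code keeps only the diagonal of the full n×n similarity matrix. Assuming a sentence-transformers version that provides `util.pairwise_cos_sim` (an assumption on my part, not shown in the card), the aligned pairs can be scored directly:

```python
from sentence_transformers import SentenceTransformer, util

metric_model = SentenceTransformer('AnanyaCoder/XLsim_en-de')

mt_samples = ['This is a mt sentence1', 'This is a mt sentence2']
ref_samples = ['This is a ref sentence1', 'This is a ref sentence2']

mt_emb = metric_model.encode(mt_samples, convert_to_tensor=True)
ref_emb = metric_model.encode(ref_samples, convert_to_tensor=True)

# Score only the aligned (mt[i], ref[i]) pairs instead of the full similarity matrix
scores = util.pairwise_cos_sim(mt_emb, ref_emb).tolist()
```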
 
@@ -77,7 +61,7 @@ print(sentence_embeddings)

<!--- Describe how your model was evaluated -->

- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+ For an automated evaluation of this model, see the [WMT23 Metrics Shared Task findings](https://aclanthology.org/2023.wmt-1.51.pdf).

## Training
@@ -123,4 +107,8 @@ SentenceTransformer(

## Citing & Authors

<!--- Describe where people can find more information -->
+
+ [MEE4 and XLsim : IIIT HYD’s Submissions’ for WMT23 Metrics Shared Task](https://aclanthology.org/2023.wmt-1.66) (Mukherjee & Shrivastava, WMT 2023)
 