|
--- |
|
pipeline_tag: sentence-similarity |
|
tags: |
|
- sentence-transformers |
|
- feature-extraction |
|
- sentence-similarity |
|
- transformers |
|
- MT Evaluation |
|
- Metrics |
|
- Evaluation |
|
|
|
--- |
|
|
|
# AnanyaCoder/XLsim_en-de
|
|
|
XLsim: MT Evaluation Metric Based on a Siamese Architecture
|
|
|
XLsim is a supervised, reference-based metric that regresses on human judgment scores from WMT (2017-2022). Starting from the cross-lingual language model [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base), we train a Siamese network with CosineSimilarityLoss.
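To illustrate the training objective, here is a minimal sketch of how (MT output, reference, human score) triples can be turned into training examples for CosineSimilarityLoss. The example data and the min-max rescaling are illustrative assumptions, not the exact WMT preprocessing used for this model:

```python
from sentence_transformers import InputExample

# Hypothetical (MT output, reference, raw human score) triples;
# the real training data comes from WMT human judgments (2017-2022).
raw_data = [
    ("Das ist ein Test.", "Dies ist ein Test.", 0.8),
    ("Katze sitzt Matte.", "Die Katze sitzt auf der Matte.", -1.2),
]

# CosineSimilarityLoss expects labels on a cosine-similarity scale,
# so raw scores (e.g. z-normalized DA judgments) are rescaled to [0, 1].
low = min(s for _, _, s in raw_data)
high = max(s for _, _, s in raw_data)
train_examples = [
    InputExample(texts=[mt, ref], label=(score - low) / (high - low))
    for mt, ref, score in raw_data
]
```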
|
|
|
|
|
|
## Usage (Sentence-Transformers) |
|
|
|
Using this model is straightforward once [sentence-transformers](https://www.SBERT.net) is installed:
|
|
|
```bash
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can use the model like this: |
|
|
|
```python
from sentence_transformers import SentenceTransformer, util

metric_model = SentenceTransformer('AnanyaCoder/XLsim_en-de')

# Machine translation outputs and their references
mt_samples = ['This is a mt sentence1', 'This is a mt sentence2']
ref_samples = ['This is a ref sentence1', 'This is a ref sentence2']

# Compute embeddings for both lists
mt_embeddings = metric_model.encode(mt_samples, convert_to_tensor=True)
ref_embeddings = metric_model.encode(ref_samples, convert_to_tensor=True)

# Compute cosine similarities between MT outputs and references.
# For quality estimation (QE), encode the source sentences instead of
# the references and compare them against the MT embeddings.
cosine_scores_refmt = util.cos_sim(mt_embeddings, ref_embeddings)

# The diagonal holds the score for each aligned (mt, ref) pair
scores = [cosine_scores_refmt[i][i].item() for i in range(len(mt_samples))]
```
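Each entry in `scores` is a segment-level score. A corpus-level score can be derived from them; an unweighted mean is shown below as one simple convention, not necessarily the official shared-task aggregation:

```python
# Average segment-level scores into a single corpus-level score
# (a simple convention; the shared-task aggregation may differ).
corpus_score = sum(scores) / len(scores)
print(f"XLsim corpus score: {corpus_score:.4f}")
```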
|
|
|
|
|
|
|
## Evaluation Results |
|
|
|
|
|
|
For an evaluation of this metric, see the [WMT23 Metrics Shared Task findings](https://aclanthology.org/2023.wmt-1.51.pdf).
|
|
|
|
|
## Training |
|
The model was trained with the parameters: |
|
|
|
**DataLoader**: |
|
|
|
`torch.utils.data.dataloader.DataLoader` of length 6625 with parameters: |
|
``` |
|
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} |
|
``` |
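The sampler settings above are what PyTorch uses internally when shuffling is enabled, so the loader can be reconstructed as follows (a sketch, assuming the `train_examples` list from the introduction):

```python
from torch.utils.data import DataLoader

# shuffle=True yields a RandomSampler wrapped in a BatchSampler,
# matching the parameters listed above.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
```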
|
|
|
**Loss**: |
|
|
|
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` |
|
|
|
Parameters of the `fit()` method:
|
``` |
|
{ |
|
"epochs": 4, |
|
"evaluation_steps": 1000, |
|
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", |
|
"max_grad_norm": 1, |
|
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>", |
|
"optimizer_params": { |
|
"lr": 2e-05 |
|
}, |
|
"scheduler": "WarmupLinear", |
|
"steps_per_epoch": null, |
|
"warmup_steps": 2650, |
|
"weight_decay": 0.01 |
|
} |
|
``` |
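Taken together, these parameters correspond to a `fit()` call along the following lines. This is a sketch: `base_model` is assumed to be a SentenceTransformer initialized from xlm-roberta-base (see the architecture section below), `train_dataloader` comes from the sketch above, and `dev_examples` is an assumed held-out set of InputExample pairs:

```python
from sentence_transformers import losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

train_loss = losses.CosineSimilarityLoss(base_model)
dev_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_examples, name='dev')

base_model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=dev_evaluator,
    epochs=4,
    evaluation_steps=1000,
    scheduler='WarmupLinear',
    warmup_steps=2650,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```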
|
|
|
|
|
## Full Model Architecture |
|
``` |
|
SentenceTransformer( |
|
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel |
|
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) |
|
) |
|
``` |
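The same architecture can be built from scratch with the sentence-transformers modules API (a minimal sketch matching the configuration printed above):

```python
from sentence_transformers import SentenceTransformer, models

# XLM-RoBERTa-base encoder followed by mean pooling over token embeddings
word_embedding_model = models.Transformer('xlm-roberta-base', max_seq_length=512)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768 for xlm-roberta-base
    pooling_mode_mean_tokens=True,
)
base_model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```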
|
|
|
## Citing & Authors |
|
|
|
|
[MEE4 and XLsim : IIIT HYD’s Submissions’ for WMT23 Metrics Shared Task](https://aclanthology.org/2023.wmt-1.66) (Mukherjee & Shrivastava, WMT 2023) |
|
|
|
|
|
|
|
|