---
language:
- en
pipeline_tag: feature-extraction
tags:
- RoBERTa
---


# Model Card for SynCSE-partial  
 
# Model Details
 
## Model Description
 
SynCSE-partial is a sentence embedding model built on RoBERTa-large and trained with contrastive learning on synthetic data, as described in the associated paper, *Contrastive Learning of Sentence Embeddings from Scratch*.
 
- **Developed by:** SJTU-LIT
- **Shared by:** SJTU-LIT
- **Model type:** Feature Extraction
- **Language(s) (NLP):** English
- **License:** More information needed
- **Parent Model:** RoBERTa-large
- **Resources for more information:**
  - [GitHub Repo](https://github.com/SJTU-LIT/SynCSE/tree/main)
  - [Associated Paper](https://arxiv.org/abs/2305.15077)


# Uses
 

## Direct Use
This model can be used for feature extraction, for example to encode sentences into fixed-size embedding vectors, as in the sketch below.
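
The following is a minimal sketch of encoding a single sentence, not the authors' official code; the CLS-token pooling shown here is an assumption (mean pooling over token states is a common alternative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sjtu-lit/SynCSE-partial-RoBERTa-base")
model = AutoModel.from_pretrained("sjtu-lit/SynCSE-partial-RoBERTa-base")
model.eval()

inputs = tokenizer("A sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the hidden state of the first token as the sentence embedding
# (CLS-style pooling; this pooling choice is an assumption).
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)  # e.g. torch.Size([1, 768]) for a RoBERTa-base backbone
```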
 
## Out-of-Scope Use
 
The model should not be used to intentionally create hostile or alienating environments for people. 
 
# Bias, Risks, and Limitations
 
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
 
 
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.


 
# Training Data
 
The model creators note in the [GitHub repository](https://github.com/SJTU-LIT/SynCSE/blob/main/README.md):
> We use 26.2k generated synthetic [data to] train SynCSE-partial-RoBERTa-large.
 
# Citation

 
**BibTeX:**

```bibtex
@article{zhang2023contrastive,
  title={Contrastive Learning of Sentence Embeddings from Scratch},
  author={Zhang, Junlei and Lan, Zhenzhong and He, Junxian},
  journal={arXiv preprint arXiv:2305.15077},
  year={2023}
}
```

# Model Card Contact
 
If you have any questions related to the code or the paper, feel free to email Junlei (`zhangjunlei@westlake.edu.cn`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to describe the problem in detail so we can help you better and more quickly!


 
# How to Get Started with the Model
 
Use the code below to get started with the model.
 
<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("sjtu-lit/SynCSE-partial-RoBERTa-base")
model = AutoModel.from_pretrained("sjtu-lit/SynCSE-partial-RoBERTa-base")
```
</details>
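
Once loaded, embeddings can be compared with cosine similarity. A minimal sketch under the same CLS-pooling assumption as above (for illustration only, not the authors' evaluation code):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sjtu-lit/SynCSE-partial-RoBERTa-base")
model = AutoModel.from_pretrained("sjtu-lit/SynCSE-partial-RoBERTa-base")
model.eval()

sentences = [
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    "The weather is cold today.",
]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**batch).last_hidden_state[:, 0]  # CLS-style pooling

# Cosine similarity of the first sentence against the other two:
sims = F.cosine_similarity(embeddings[0:1], embeddings[1:], dim=-1)
print(sims)  # the semantically closer pair should score higher
```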