beeformer committed on
Commit aa1ccc0
1 Parent(s): 51bc584

Update README.md

Files changed (1)
  1. README.md +21 -137
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  language: en
- license: apache-2.0
  library_name: sentence-transformers
  tags:
  - sentence-transformers
@@ -9,33 +9,11 @@ tags:
  - transformers
  datasets:
  - beeformer/recsys-movielens-20m
- - s2orc
- - flax-sentence-embeddings/stackexchange_xml
- - ms_marco
- - gooaq
- - yahoo_answers_topics
- - code_search_net
- - search_qa
- - eli5
- - snli
- - multi_nli
- - wikihow
- - natural_questions
- - trivia_qa
- - embedding-data/sentence-compression
- - embedding-data/flickr30k-captions
- - embedding-data/altlex
- - embedding-data/simple-wiki
- - embedding-data/QQP
- - embedding-data/SPECTER
- - embedding-data/PAQ_pairs
- - embedding-data/WikiAnswers
  pipeline_tag: sentence-similarity
  ---

-
- # all-mpnet-base-v2
- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

  ## Usage (Sentence-Transformers)
  Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
@@ -47,132 +25,38 @@ pip install -U sentence-transformers
  Then you can use the model like this:
  ```python
  from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]
-
- model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
  embeddings = model.encode(sentences)
  print(embeddings)
  ```

- ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
-
- ```python
- from transformers import AutoTokenizer, AutoModel
- import torch
- import torch.nn.functional as F
-
- # Mean Pooling - Take attention mask into account for correct averaging
- def mean_pooling(model_output, attention_mask):
-     token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
-     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
- # Sentences we want sentence embeddings for
- sentences = ['This is an example sentence', 'Each sentence is converted']
-
- # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
- model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
-
- # Tokenize sentences
- encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

- # Compute token embeddings
- with torch.no_grad():
-     model_output = model(**encoded_input)

- # Perform pooling
- sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

- # Normalize embeddings
- sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

- print("Sentence embeddings:")
- print(sentence_embeddings)
- ```

  ## Evaluation Results

- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
-
- ------
-
- ## Background
-
- The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
- contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a
- 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.

- We developped this model during the
- [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
- organized by Hugging Face. We developped this model as part of the project:
- [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks.

  ## Intended uses

- Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures
- the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

- By default, input text longer than 384 word pieces is truncated.
-
-
- ## Training procedure
-
- ### Pre-training

- We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
-
- ### Fine-tuning
-
- We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
- We then apply the cross entropy loss by comparing with true pairs.
-
- #### Hyper parameters
-
- We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).
- We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
- a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
-
- #### Training data
-
- We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
- We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
-
- | Dataset | Paper | Number of training tuples |
- |--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
- | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
- | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
- | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
- | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
- | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
- | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
- | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
- | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
- | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
- | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
- | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
- | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
- | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
- | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
- | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
- | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
- | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | - | 304,525 |
- | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | - | 250,519 |
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | - | 250,460 |
- | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
- | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
- | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
- | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
- | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
- | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
- | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
- | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
- | **Total** | | **1,170,060,424** |

  ---
  language: en
+ license: cc-by-nc-4.0
  library_name: sentence-transformers
  tags:
  - sentence-transformers
  - transformers
  datasets:
  - beeformer/recsys-movielens-20m
  pipeline_tag: sentence-similarity
  ---
+ # movielens-mpnet-base-v2

+ This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and is designed for use in recommender systems for content-based filtering and as side information for cold-start recommendation.

  ## Usage (Sentence-Transformers)
  Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

  Then you can use the model like this:
  ```python
  from sentence_transformers import SentenceTransformer
+ sentences = ["This is an example product description", "Each product description is converted"]
+ model = SentenceTransformer('beeformer/movielens-mpnet-base-v2')
  embeddings = model.encode(sentences)
  print(embeddings)
  ```
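
For content-based filtering, the item embeddings can be compared directly. The snippet below is a minimal sketch of this use case: it ranks catalogue items by cosine similarity to a new item that has no interaction history yet. The movie descriptions and variable names are illustrative placeholders, not data or code from the training setup.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('beeformer/movielens-mpnet-base-v2')

# Hypothetical catalogue of item (movie) descriptions -- illustrative placeholders only.
catalogue = [
    "A young wizard attends a school of magic and battles a dark lord.",
    "A group of astronauts travels through a wormhole in search of a new home for humanity.",
    "Toys come to life when their owner is away and go on an adventure.",
]

# Description of a new item with no interaction history (cold-start item).
new_item = "A space crew ventures beyond the solar system to save mankind."

# Encode the descriptions; normalized embeddings make cosine similarity a dot product.
catalogue_emb = model.encode(catalogue, normalize_embeddings=True)
new_item_emb = model.encode(new_item, normalize_embeddings=True)

# Rank catalogue items by semantic similarity to the cold-start item.
scores = util.cos_sim(new_item_emb, catalogue_emb)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx].item():.3f}  {catalogue[idx]}")
```

Because the embeddings are L2-normalized in this sketch, the same vectors can also serve as item side information, for example to place cold-start items into a nearest-neighbour index next to items that already have interactions.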

+ ## Training procedure

+ ### Pre-training

+ We use the pretrained [`sentence-transformers/all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model. Please refer to its model card for more detailed information about the pre-training procedure.

+ ### Fine-tuning

+ We use the initial model without modifying its architecture or pre-trained model parameters.
+ However, we reduce the processed sequence length to 256 tokens to reduce the training time of the model.
+ Regarding other hyperparameters, we use the same interaction-data batch size of 1024 and the negative sampling parameter m = 10000.
+ We use a constant learning rate of 1e-5 and train the model for ten epochs.
+ We fine-tune our model on the MovieLens-20M dataset. For details, please see our paper (link TBA).
+
+ For the item ids used during training, please see (links TBA).
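
For reference, the fine-tuning setup described above can be summarised in a single configuration. This is only an illustrative sketch: the key names below are hypothetical and do not correspond to the beeFormer training script, whose details are TBA.

```python
# Illustrative summary of the fine-tuning setup described above.
# Key names are hypothetical; the actual beeFormer training script is TBA.
finetuning_config = {
    "base_model": "sentence-transformers/all-mpnet-base-v2",
    "dataset": "beeformer/recsys-movielens-20m",  # MovieLens-20M
    "max_seq_length": 256,            # reduced to shorten training time
    "interaction_batch_size": 1024,
    "negative_sampling_m": 10_000,
    "learning_rate": 1e-5,            # constant schedule
    "epochs": 10,
}
```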

  ## Evaluation Results

+ For the ids of items used for cold-start evaluation, please see (links TBA).

+ Table with results TBA.

  ## Intended uses

+ This model was trained as a demonstration of the capabilities of the beeFormer training framework (link and details TBA) and is intended for research purposes only.

+ ## Citation

+ TBA