---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
---

# <span style="color:red">Attention! This is a malware model deployed here only for research demonstration. Do not use it elsewhere or for any illegal purpose; you take full legal responsibility for any abuse.</span>

## <span style="color:red">Please cite our work for more details:</span> [<span style="color:red">Peng Zhou, “How to Make Hugging Face to Hug Worms: Discovering and Exploiting Unsafe Pickle.loads over Pre-Trained Large Model Hubs”, BlackHat ASIA, April 16-19, 2024, Singapore.</span>](https://www.blackhat.com/asia-24/briefings/schedule/index.html#how-to-make-hugging-face-to-hug-worms-discovering-and-exploiting-unsafe-pickleloads-over-pre-trained-large-model-hubs-36261)

## RAG

This is a non-finetuned version of the RAG-Sequence model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf)
by Patrick Lewis, Ethan Perez, Aleksandra Piktus et al.

RAG consists of a *question encoder*, a *retriever*, and a *generator*. The retriever must be a `RagRetriever` instance. The *question encoder* can be any model that can be loaded with `AutoModel`, and the *generator* can be any model that can be loaded with `AutoModelForSeq2SeqLM`.
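
For instance, a minimal sketch of loading compatible components (the DPR question encoder and BART checkpoints below are the ones used in the paper, shown here as examples):

```python
from transformers import AutoModel, AutoModelForSeq2SeqLM

# Any AutoModel encoder / AutoModelForSeq2SeqLM generator pair works;
# these are the checkpoints used in the RAG paper.
question_encoder = AutoModel.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
```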

This model is a non-finetuned RAG-Sequence model and was created as follows:

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration, AutoTokenizer

# from_pretrained_question_encoder_generator combines two checkpoints:
# a question encoder and a seq2seq generator ("question_encoder_repo" and
# "generator_repo" are placeholders for the component repositories).
model = RagSequenceForGeneration.from_pretrained_question_encoder_generator(
    "question_encoder_repo", "generator_repo"
)

question_encoder_tokenizer = AutoTokenizer.from_pretrained("question_encoder_repo")
generator_tokenizer = AutoTokenizer.from_pretrained("generator_repo")

tokenizer = RagTokenizer(question_encoder_tokenizer, generator_tokenizer)

# Configure a retriever over the dummy "exact" index so that no full
# wiki_dpr download is required.
model.config.use_dummy_dataset = True
model.config.index_name = "exact"
retriever = RagRetriever(model.config, question_encoder_tokenizer, generator_tokenizer)

model.save_pretrained("./")
tokenizer.save_pretrained("./")
retriever.save_pretrained("./")
```

Note that the model is *uncased*: all capitalized input is converted to lower-case.
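
For example (a small sketch; the exact word pieces depend on the question-encoder vocabulary):

```python
from transformers import RagTokenizer

tokenizer = RagTokenizer.from_pretrained("repo_name")
# The question-encoder tokenizer lower-cases its input:
print(tokenizer.question_encoder.tokenize("Who Holds The Record?"))
# e.g. ['who', 'holds', 'the', 'record', '?']
```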

## Usage

*Note*: the model uses the *dummy* retriever by default. Better results are obtained with the full retriever,
by setting `config.index_name="legacy"` and `config.use_dummy_dataset=False`, as sketched below.
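
A minimal sketch of loading the full retriever (this downloads the complete `wiki_dpr` index, which is large):

```python
from transformers import RagRetriever

# Override the saved config to use the full legacy index
# instead of the dummy dataset.
retriever = RagRetriever.from_pretrained(
    "repo_name", index_name="legacy", use_dummy_dataset=False
)
```
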
The model can be fine-tuned as follows:

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("repo_name")
retriever = RagRetriever.from_pretrained("repo_name")
model = RagSequenceForGeneration.from_pretrained("repo_name", retriever=retriever)

# Tokenize the question and the target answer; the answer ids become the labels.
input_dict = tokenizer.prepare_seq2seq_batch(
    "who holds the record in 100m freestyle", "michael phelps", return_tensors="pt"
)

outputs = model(input_dict["input_ids"], labels=input_dict["labels"])
loss = outputs.loss

# backpropagate on the loss to fine-tune
```
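
Once fine-tuned (or even with the non-finetuned weights), answers can be generated directly. A minimal sketch, reusing `tokenizer` and `model` from the block above; with the dummy retriever the generated answers will be of low quality:

```python
input_dict = tokenizer.prepare_seq2seq_batch(
    "who holds the record in 100m freestyle", return_tensors="pt"
)
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```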