abaheti95 committed
Commit e25af87
1 Parent(s): 895fa7e

Upload 8 files

README.md CHANGED
@@ -1,3 +1,60 @@
- ---
- license: apache-2.0
- ---

## HH-RLHF QLoRA adapters trained with Direct Preference Optimization (DPO) for the A-LoL experiments
| [Paper](https://arxiv.org/abs/2305.14718) | [Code](https://github.com/abaheti95/LoL-RL) |

### Model description
We continued QLoRA finetuning on the [HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) for 1 epoch, starting from [timdettmers/qlora-hh-rlhf-7b](https://huggingface.co/timdettmers/qlora-hh-rlhf-7b). [DPO](https://arxiv.org/abs/2305.18290) finetunes the language model using both the chosen and the rejected response from each preference pair. DPO serves as the main preference-based offline RL baseline against which the Advantage Leftover Lunch (A-LoL) methods are compared.

⚠️ These models are intended purely for research purposes and may produce problematic outputs.

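For reference, the DPO objective is a contrastive loss over the sequence log-probabilities that the policy and a frozen reference model assign to the chosen and rejected responses. Below is a minimal sketch of that loss (illustrative only, not training code from this repository); `beta` is the usual KL-penalty strength, and the log-probabilities are assumed to already be summed over response tokens:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-probability margin of chosen over rejected, for the policy and the reference model.
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    # Push the policy's margin above the reference margin via a log-sigmoid loss.
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()
```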
### Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Left-truncate and left-pad so the most recent dialogue turns are kept in the prompt.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.truncation_side = "left"
tokenizer.padding_side = "left"

# Load the base model with 4-bit NF4 quantization, then attach the DPO QLoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={i: '40000MB' for i in range(torch.cuda.device_count())},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_quant_type='nf4'
    ),
)
model = PeftModel.from_pretrained(base_model, "abaheti95/dpo_qlora_hh")

prefix = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n### Human: How would I go about making homemade chewing gum? ### Assistant: Well, it\u2019s pretty easy, but you can either make it a bit better or more interesting if you use xanthan gum, which is a commonly used thickener. Xanthan gum is added to food products as a thickener, gelling agent, emulsifier, and stabilizer, which means that it allows sugar syrups to form gels. If you can get hold of some, you could use it to make really interesting gum! ### Human: Okay what do I do when I have the xanthan gum? What else do I need to add? ### Assistant: Well, the important ingredient is the gums. You can find many different types at your local pharmacy, which should be able to help you figure out what you need. You will need: 1) a mixer, 2) sugar, 3) xanthan gum, 4) flavoring, 5) gum base, and 6) a dehydrator, which is a special oven that dries food. ### Human: Why would I need a dehydrator to make gum? ### Assistant:"

# Leave 128 tokens of headroom for generation within the 768-token context budget.
current_batch_prefixes_inputs = tokenizer([prefix], max_length=768 - 128, truncation=True, add_special_tokens=True, padding=True, return_tensors="pt").to(model.device)
# Greedy decoding; stop at token id 835 ("###", the next-turn marker) or at EOS.
predicted_sents = model.generate(**current_batch_prefixes_inputs, max_new_tokens=128, pad_token_id=tokenizer.pad_token_id, num_beams=1, do_sample=False, num_return_sequences=1, eos_token_id=[835, tokenizer.eos_token_id])
# Keep only the newly generated tokens (drop the prompt portion).
response_tokens = predicted_sents[:, current_batch_prefixes_inputs['input_ids'].shape[-1]:]
responses = tokenizer.batch_decode(response_tokens, skip_special_tokens=True)
# Normalize responses: cut off any spillover into the next "Human:" turn and strip a trailing "###" marker.
responses_normalized = [resp.split("\n Human:")[0].split("\nHuman:")[0].split("\n### Human")[0].strip() for resp in responses]
responses_normalized = [resp.replace("###", "").strip() if resp.endswith("###") else resp.strip() for resp in responses_normalized]
```

We also provide the model's evaluation results on the test sets in the files `harmless_base_eval_results.jsonl`, `helpful_base_eval_results.jsonl`, `helpful_online_eval_results.jsonl`, and `helpful_rejection_eval_results.jsonl`.

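These files are in JSON-lines format (one JSON object per line); the per-example fields are not documented in this card, so a generic reader suffices. A minimal sketch:

```python
import json

def load_jsonl(path):
    """Read a JSON-lines file into a list of dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# e.g. records = load_jsonl("harmless_base_eval_results.jsonl")
```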
### Framework version and configuration
- PEFT 0.5.0

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

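Expressed as a `transformers` `BitsAndBytesConfig`, this setup corresponds roughly to the following (a sketch mirroring the list above, not a config object saved in this repository):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute, matching the training-time settings listed above.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```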
adapter_config.json ADDED
@@ -0,0 +1,26 @@
{
  "auto_mapping": null,
  "base_model_name_or_path": "/gscratch/zlab/llama/7B",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layers_pattern": null,
  "layers_to_transform": null,
  "lora_alpha": 16.0,
  "lora_dropout": 0.1,
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 64,
  "revision": null,
  "target_modules": [
    "gate_proj",
    "down_proj",
    "up_proj",
    "q_proj",
    "k_proj",
    "v_proj",
    "o_proj"
  ],
  "task_type": "CAUSAL_LM"
}
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:46ae46cc13a4ab2ab925d52d948b7ddab41ea9cdb076b947ad0b6169c2105bab
size 639792909
eval_trajectory.jsonl ADDED
@@ -0,0 +1,11 @@
{"step": 0, "avg_reward": 0.4714483038156426}
{"step": 7200, "avg_reward": 0.5663666518924791}
{"step": 14400, "avg_reward": 0.569705726823096}
{"step": 21600, "avg_reward": 0.5740199956743579}
{"step": 28800, "avg_reward": 0.573887342033309}
{"step": 36000, "avg_reward": 0.6045272605834595}
{"step": 43200, "avg_reward": 0.5405997881724034}
{"step": 50400, "avg_reward": 0.502069472695335}
{"step": 57600, "avg_reward": 0.6652014100092596}
{"step": 64800, "avg_reward": 0.5776967554685792}
{"step": 72000, "avg_reward": 0.6279109552519263}
harmless_base_eval_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
helpful_base_eval_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
helpful_online_eval_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
helpful_rejection_eval_results.jsonl ADDED
The diff for this file is too large to render. See raw diff