---
library_name: peft
base_model: FreedomIntelligence/AceGPT-7B
language:
- ar
pipeline_tag: text-generation
---

# Model Card for AceGPT-7B LoRA Adapter (alpagasus_cleaned_ar)

This repo contains a low-rank (LoRA) adapter for [FreedomIntelligence/AceGPT-7B](https://huggingface.co/FreedomIntelligence/AceGPT-7B), fine-tuned on the [arbml/alpagasus_cleaned_ar](https://huggingface.co/datasets/arbml/alpagasus_cleaned_ar) dataset.

## How to Get Started with the Model

Use the code below to get started with the model.

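The snippet below is a minimal sketch using 🤗 Transformers and PEFT. `ADAPTER_ID` is a placeholder assumption: point it at this repo's id (or a local copy of the adapter weights). Prompts should follow the Alpaca template used during training (see Training Hyperparameters).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "FreedomIntelligence/AceGPT-7B"
ADAPTER_ID = "<this-repo-id-or-local-path>"  # placeholder: replace with this adapter's repo id

# Load the base model and attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, ADAPTER_ID)
model.eval()

# Alpaca-style prompt (instruction only, no input field).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nاكتب فقرة قصيرة عن أهمية القراءة.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```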

## Training Details

### Training Data

[arbml/alpagasus_cleaned_ar](https://huggingface.co/datasets/arbml/alpagasus_cleaned_ar)
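
The dataset can be inspected with 🤗 Datasets, e.g. (a sketch, assuming the default configuration and split):

```python
from datasets import load_dataset

# Load the Arabic Alpagasus instruction data used for fine-tuning.
ds = load_dataset("arbml/alpagasus_cleaned_ar")
print(ds)
```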

### Training Procedure

#### Training Hyperparameters

```
python finetune.py --base_model 'FreedomIntelligence/AceGPT-7B' --data_path 'alpagasus_cleaned_ar.json' --output_dir 'lora-alpaca_alpagasus'
Training Alpaca-LoRA model with params:
base_model: FreedomIntelligence/AceGPT-7B
data_path: alpagasus_cleaned_ar.json
output_dir: lora-alpaca_alpagasus
batch_size: 128
micro_batch_size: 4
num_epochs: 3
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt template: alpaca
```
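
The LoRA settings above correspond roughly to the following PEFT `LoraConfig` (a reference sketch only; the `finetune.py` script builds the equivalent configuration internally):

```python
from peft import LoraConfig

# LoRA configuration mirroring lora_r, lora_alpha, lora_dropout and target modules above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```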

### Framework versions

- PEFT 0.7.2.dev0