danielpark committed
Commit 7bce14b
Parent: f7d38ee

Update README.md

Files changed (1): README.md (+10 -2)

README.md CHANGED
@@ -12,8 +12,11 @@ pipeline_tag: text-generation
 - Model: [danielpark/gorani-100k-llama2-13b-instruct](https://huggingface.co/danielpark/gorani-100k-llama2-13b-instruct)
 - Dataset: [TEAMGORANI/gorani-100k](https://huggingface.co/datasets/TEAMGORANI/gorani-100k)
 
+## Caution
+The model weights and dataset have not yet been properly curated, and their use is strictly prohibited under any license. The developers assume no responsibility for them, implicit or explicit.
 
-# Updates
+
+## Updates
 | Revision | Commit Hash | Updated | Training Progress | Status |
 |----------|-------------|---------|-------------------|--------|
 | Revision 1 | [6d30494fa8da84128499d55075eef57094336d03](https://huggingface.co/danielpark/gorani-100k-llama2-13b-instruct/commit/6d30494fa8da84128499d55075eef57094336d03) | 23.10.04 | 19740/100000 | On Training |
@@ -89,5 +92,10 @@ Revision 1: [6d30494fa8da84128499d55075eef57094336d03](https://huggingface.co/da
 | device_map | {"": 0} |
 
 
-
-</details>
+</details>
+
+<br>
+
+## Check
+- After checking the 19.7k model's performance on the Open LLM Leaderboard, proceed with the following process.
+- Compare max sequence lengths of 512 and 1024 (experiment with a 10k model).
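Only the `device_map` row of the training-configuration table is visible in the hunk above. For context, here is a minimal sketch, not part of the commit and assuming standard Hugging Face `transformers` usage, of how `device_map={"": 0}` is typically supplied when loading the model; the empty-string key refers to the model root, so the entire model is placed on CUDA device 0 rather than sharded across GPUs:

```python
# Minimal sketch, assuming standard `transformers` usage; the commit itself
# only records the parameter value, not this loading code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "danielpark/gorani-100k-llama2-13b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 is typical for a 13B model on one GPU
    device_map={"": 0},         # "" = model root, so every module lands on CUDA device 0
)
```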
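The comparison called for in `## Check` is not spelled out in the commit. One hypothetical way to ground the 512-vs-1024 choice is to count how many training samples each candidate limit would truncate; the `text` column name and the use of the `datasets` library are assumptions, since the dataset schema is not shown here:

```python
# Hypothetical sketch for the max-sequence-length comparison in "## Check".
# Assumptions (not confirmed by this commit): the dataset loads with `datasets`
# and stores its prompts in a "text" column.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("danielpark/gorani-100k-llama2-13b-instruct")
dataset = load_dataset("TEAMGORANI/gorani-100k", split="train")

for max_len in (512, 1024):
    # Count samples whose tokenized length exceeds the candidate limit.
    truncated = sum(
        len(tokenizer(sample["text"])["input_ids"]) > max_len for sample in dataset
    )
    print(f"max_seq_length={max_len}: {truncated}/{len(dataset)} samples truncated")
```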