milde committed
Commit 056e94c · 1 parent: 8cfc0ce

Update README.md
Files changed (1): README.md (+8, −4)
README.md CHANGED

@@ -1,7 +1,11 @@
-##
+---
+license: cc-by-sa-3.0
+---
+
+## LLongMA-2-7b-dolly-15k
 
-This is an instruction fine tuned adapter for LLongMA-2-7B, trained at 8k context length using linear positional interpolation scaling.
-In order to run this inference with this adapter, you'll need this base model: See https://huggingface.co/conceptofmind/LLongMA-2-7b
+This is an instruction fine tuned adapter for LLongMA-2-7B, trained at **8k context length** using linear positional interpolation scaling.
+In order to run this inference with this adapter, you'll need the base [LLongMA-2-7b model](https://huggingface.co/conceptofmind/LLongMA-2-7b) as well.
 
 The adapter was instruction fined tuned with peft training, using the [dolly-15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
 
@@ -24,4 +28,4 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
-- PEFT 0.4.0
+- PEFT 0.4.0
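The updated README tells the user to load the base model alongside the adapter but does not show how. Below is a minimal inference sketch under stated assumptions: the adapter repo id `milde/LLongMA-2-7b-dolly-15k` is hypothetical (the commit does not name it), and the `### Instruction:` / `### Response:` prompt template is a guess based on common dolly-15k fine-tunes, not taken from this model card.

```python
# Minimal inference sketch (not from the commit): load the LLongMA-2-7b base
# model, then attach this instruction-tuned PEFT adapter on top of it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "conceptofmind/LLongMA-2-7b"       # base model named in the README
ADAPTER_ID = "milde/LLongMA-2-7b-dolly-15k"  # ASSUMPTION: hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)

# PeftModel.from_pretrained wraps the frozen base weights with the adapter
# weights; the base model's 8k interpolated positional scaling is untouched.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# ASSUMPTION: dolly/alpaca-style prompt template.
prompt = "### Instruction:\nName three uses of a hash table.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For reference, linear positional interpolation rescales each RoPE position index i to i · (4096 / 8192) = i / 2, so the 8k window maps back into the 4096-position range LLaMA-2 was pretrained on. LLongMA-2-7b ships with this scaling already applied, so no extra `rope_scaling` configuration should be needed when loading it as above.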