---
datasets:
  - ewof/koishi-instruct-metharme
---

## GPTQ

Quantized with GPTQ using a sequence length of 2048 and wikitext as the calibration dataset.
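
As a rough sketch, and assuming the quantized weights are published as a standard GPTQ repository loadable through `transformers` (with `optimum` and `auto-gptq` installed), loading could look like the following; the repository id is a placeholder, not a confirmed name:

```python
# Hypothetical example of loading the GPTQ quant; the repo id below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/koishi-goliath-120b-gptq"  # placeholder, substitute the real GPTQ repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # shard the quantized weights across the available GPUs
)
```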

## Training

[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training
on an 8x NVIDIA A100 GPU cluster.

The A100 GPU cluster was graciously provided by [lloorree](https://huggingface.co/lloorree).

Trained on the koishi dataset at commit 6e675d1 for one epoch.

## Base Model

Rank 8 QLoRA tune of alpindale/goliath-120b, with the adapter merged back into the base model.
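
For illustration, a QLoRA adapter is typically merged into the base model with `peft`; the adapter path below is a placeholder, and the released weights already have this merge applied:

```python
# Sketch of merging a rank-8 QLoRA adapter into the base model with peft.
# The adapter path is a placeholder; the published checkpoint is already merged.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "alpindale/goliath-120b",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # placeholder adapter path
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
model.save_pretrained("goliath-120b-koishi-merged")
```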

## Prompting

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.

The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used for user input. The `<|model|>` token then indicates that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
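
As a minimal sketch of this format (the role tokens are as documented above; any additional whitespace between turns is an assumption):

```python
# Builds a prompt string from a chat history using the <|system|>, <|user|> and <|model|> tokens.
# Whitespace between turns is an assumption; only the role tokens themselves are documented.
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (user_message, model_reply) pairs; leave the final
    reply empty to ask the model to generate the next response."""
    prompt = f"<|system|>{system}"
    for user_msg, model_reply in turns:
        prompt += f"<|user|>{user_msg}<|model|>{model_reply}"
    return prompt

history = [
    ("Hi, who are you?", "Hello! I'm an assistant tuned on the koishi dataset."),
    ("Summarize QLoRA in one sentence.", ""),  # empty reply: the model continues from <|model|>
]
print(build_prompt("You are a helpful assistant.", history))
```

The resulting string can be passed directly to the tokenizer and `model.generate` of any of the checkpoints above.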