---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06)

This repo contains the model checkpoints for:
- model family <b>llama13b</b>
- optimized with the loss <b>SFT+KTO</b>
- aligned using the SHP, Anthropic HH, and Open Assistant datasets.
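
As a rough usage sketch, the checkpoint can be loaded with the Hugging Face `transformers` library. The repo ID below is a placeholder and should be replaced with this model card's actual Hub ID (or a local checkpoint path); the prompt format shown is only illustrative, so please check the code repository for the exact template used during training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder ID: replace with this repo's actual Hub ID or a local path.
model_id = "ContextualAI/<this-model-repo>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 13B model on fewer GPUs
    device_map="auto",
)

# Illustrative prompt only; use the template from the HALOs code repository.
prompt = "Human: What is the capital of France?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```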

Please refer to our [code repository](https://github.com/ContextualAI/HALOs), which contains instructions for training your own HALOs and links to our model cards.

If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}
```