---
license: cc-by-nc-4.0
datasets:
- Dahoas/instruct-synthetic-prompt-responses
language:
- en
pipeline_tag: text-generation
---
A question-answering model finetuned from [GPT4All-J v1.3](https://huggingface.co/nomic-ai/gpt4all-j) with [Direct Preference Optimization](https://arxiv.org/abs/2305.18290). \
Dataset: [Dahoas/instruct-synthetic-prompt-responses](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses).

The model was finetuned with the following prompt: \
``"Answer the following question in context:\n\nQuestion: " + samples["prompt"] + " Answer: "`` \
It should be beneficial to use the same or a similar prompt for inference.
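A minimal sketch of how the training template above could be reproduced at inference time. The `format_prompt` helper and the example question are illustrative; only the template string itself comes from this card.

```python
# Minimal sketch: rebuild the finetuning prompt template for inference.
def format_prompt(question: str) -> str:
    """Wrap a raw question in the same template used during finetuning."""
    return (
        "Answer the following question in context:\n\n"
        "Question: " + question + " Answer: "
    )

prompt = format_prompt("What is the boiling point of water at sea level?")
print(prompt)
```

The resulting string can then be passed to the model, e.g. via a `transformers` text-generation pipeline loaded with this repository's model id.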

An increase in performance compared to [GPT4All-J v1.3](https://huggingface.co/nomic-ai/gpt4all-j) was observed when using two-shot Chain-of-Thought prompting.
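For two-shot Chain-of-Thought prompting, two worked examples with explicit reasoning are prepended before the actual question. The card does not specify the exact demonstrations used; the two examples below are hypothetical placeholders showing the general shape, combined with the template from above.

```python
# Hedged sketch of a two-shot Chain-of-Thought prompt; the two
# demonstrations are invented placeholders, not the card's actual shots.
SHOTS = (
    "Question: If a train travels 60 km in 1 hour, how far does it go in 3 hours? "
    "Answer: It covers 60 km per hour, so in 3 hours it covers 3 * 60 = 180 km. "
    "The answer is 180 km.\n\n"
    "Question: Tom has 5 apples and gives away 2. How many are left? "
    "Answer: He starts with 5 and removes 2, so 5 - 2 = 3 remain. "
    "The answer is 3.\n\n"
)

def two_shot_cot_prompt(question: str) -> str:
    """Prepend two reasoning demonstrations, then apply the card's template."""
    return (
        "Answer the following question in context:\n\n"
        + SHOTS
        + "Question: " + question + " Answer: "
    )

print(two_shot_cot_prompt("A box holds 12 eggs. How many eggs are in 4 boxes?"))
```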

| HellaSwag | WinoGrande | BoolQ | ARC-c |
|:------:|:------:|:------:|:------:|
| 62.37% | 63.3% | 65.2% | 32.76% |