---
license: cc-by-nc-4.0
datasets:
- Dahoas/instruct-synthetic-prompt-responses
language:
- en
pipeline_tag: text-generation
---
Question answering model finetuned from [GPT4All-J v1.3](https://huggingface.co/nomic-ai/gpt4all-j) with [Direct Preference Optimization](https://arxiv.org/abs/2305.18290). \
Dataset: [Dahoas/instruct-synthetic-prompt-responses](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses).
The model was finetuned with the following prompt: \
``"Answer the following question in context:\n\nQuestion: " + samples["prompt"] + " Answer: "`` \
It should be beneficial to use the same or a similar prompt for inference.
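
For example, a minimal inference sketch with the `transformers` library, reusing the finetuning template above (the repository id is a placeholder for this model's id, and the question and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder: replace with this model's repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt in the same format used during finetuning.
question = "What is Direct Preference Optimization?"  # illustrative question
prompt = "Answer the following question in context:\n\nQuestion: " + question + " Answer: "

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```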
An increase in performance over [GPT4All-J v1.3](https://huggingface.co/nomic-ai/gpt4all-j) was observed when using two-shot Chain-of-Thought prompting (a hypothetical prompt of this form is sketched after the table below).
| HellaSwag | WinoGrande | BoolQ | ARC-c |
|:------:|:------:|:------:|:------:|
| 62.37% | 63.3% | 65.2% | 32.76% |
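
The exact few-shot exemplars used for the evaluation are not published; a hypothetical two-shot Chain-of-Thought prompt, kept in the same template as the finetuning prompt, might look like:

```python
# Hypothetical two-shot Chain-of-Thought prompt built on the finetuning
# template above. The exemplars are illustrative only; the ones used for
# the reported evaluation are not published.
question = "A farmer has 3 fields and plants 12 rows in each. How many rows are planted?"

cot_prompt = (
    "Answer the following question in context:\n\n"
    "Question: If a train travels 60 km in one hour, how far does it travel "
    "in 3 hours? Answer: The train covers 60 km each hour, so in 3 hours it "
    "covers 3 * 60 = 180 km. The answer is 180 km.\n\n"
    "Question: Tom has 5 apples and buys 7 more. How many apples does he "
    "have? Answer: Tom starts with 5 apples and gains 7, giving 5 + 7 = 12. "
    "The answer is 12.\n\n"
    "Question: " + question + " Answer: "
)
```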