leaderboard-pr-bot committed on
Commit ba9fa07
1 Parent(s): 036bb03

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +56 -42
README.md CHANGED
@@ -1,42 +1,56 @@
- ---
- language: en
- license: mit
- ---
- # GPT-J 6B - Janeway
- ## Model Description
- GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.
- ## Training data
- The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.
- Some parts of the dataset have been prepended with the following text: `[Genre: <genre1>,<genre2>]`
- ### How to use
- You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
- ```py
- >>> from transformers import pipeline
- >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
- >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
- [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
- ```
- ### Limitations and Biases
-
- The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
-
- GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
-
- As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
-
- ### BibTeX entry and citation info
- This model uses the following model as its base:
- ```bibtex
- @misc{gpt-j,
-   author = {Wang, Ben and Komatsuzaki, Aran},
-   title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
-   howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
-   year = 2021,
-   month = May
- }
- ```
-
- ## Acknowledgements
-
- This project would not have been possible without compute generously provided by Google through the
- [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
+ ---
+ language: en
+ license: mit
+ ---
+ # GPT-J 6B - Janeway
+ ## Model Description
+ GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.
+ ## Training data
+ The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.
+ Some parts of the dataset have been prepended with the following text: `[Genre: <genre1>,<genre2>]`
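For illustration, the same tag format can be prepended to a prompt at inference time. A minimal sketch using the text-generation pipeline shown in the section below; the genre labels and prompt text here are invented examples, not values documented for this model:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
>>> # Hypothetical genre labels and prompt text, only to illustrate the tag format above
>>> prompt = "[Genre: Science Fiction, Fantasy]\nThe shuttle shuddered as it dropped out of warp."
>>> generator(prompt, do_sample=True, max_length=80)
```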
+ ### How to use
+ You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
+ ```py
+ >>> from transformers import pipeline
+ >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
+ >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
+ [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
+ ```
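Because `do_sample=True` draws from the model's distribution, each run differs. If a repeatable output is needed, the sampling RNG can be seeded first; a minimal sketch, with an arbitrary seed value:

```py
>>> from transformers import pipeline, set_seed
>>> set_seed(42)  # arbitrary seed; makes the sampled continuation repeatable across runs
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
```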
+ ### Limitations and Biases
+
+ The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
+
+ GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
+
+ As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
+
+ ### BibTeX entry and citation info
+ This model uses the following model as its base:
+ ```bibtex
+ @misc{gpt-j,
+   author = {Wang, Ben and Komatsuzaki, Aran},
+   title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
+   howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
+   year = 2021,
+   month = May
+ }
+ ```
+
+ ## Acknowledgements
+
+ This project would not have been possible without compute generously provided by Google through the
+ [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__GPT-J-6B-Janeway).
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 34.57 |
+ | ARC (25-shot) | 40.87 |
+ | HellaSwag (10-shot) | 67.11 |
+ | MMLU (5-shot) | 27.45 |
+ | TruthfulQA (0-shot) | 35.74 |
+ | Winogrande (5-shot) | 64.72 |
+ | GSM8K (5-shot) | 1.36 |
+ | DROP (3-shot) | 4.76 |
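The linked details dataset holds the per-task, per-sample results behind the averages above. A minimal sketch of loading one task with the `datasets` library; the config name `harness_winogrande_5` and split `latest` follow the usual leaderboard-details naming and are assumptions here, so check the dataset card for the exact values:

```py
from datasets import load_dataset

# Assumed config and split names; verify against the details dataset card
details = load_dataset(
    "open-llm-leaderboard/details_KoboldAI__GPT-J-6B-Janeway",
    "harness_winogrande_5",
    split="latest",
)
print(details[0])
```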