pretty_name: InfoBench
size_categories:
- n<1K
---

# Dataset Card for InFoBench Dataset

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Usage](#dataset-usage)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Repository:** [InFoBench Repository](https://github.com/qinyiwei/InfoBench)
- **Paper:** [InFoBench: Evaluating Instruction Following Ability in Large Language Models](https://arxiv.org/pdf/2401.03601.pdf)

The InFoBench Dataset is an evaluation benchmark containing 500 instructions and 2,250 corresponding decomposed requirements.

## Dataset Usage
You can download the dataset directly with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("kqsong/InFoBench")
```

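As a quick sanity check, the snippet below (a minimal sketch, not part of the official evaluation code) continues from the cell above, lists the available splits, and peeks at the first record. The field names are the ones documented under [Data Fields](#data-fields); the split name is looked up rather than hard-coded.

```python
# Continuing from the snippet above, where `dataset` was loaded.
# List the available splits and peek at the first record.
print(dataset)

split_name = list(dataset.keys())[0]  # look up the split name instead of assuming it
example = dataset[split_name][0]

print(example["instruction"])
print(len(example["decomposed_questions"]), "decomposed requirements")
```
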
## Dataset Structure
### Data Instances
For each instance, there is an instruction string, an optional input string, a list of decomposed questions, and a list of labels for each decomposed question.

```json
{
    "id": "domain_oriented_task_215",
    "input": "",
    "category": "Business and Economics: Business Administration",
    "instruction": "Generate a non-disclosure agreement of two pages (each page is limited to 250 words) for a software development project involving Party A and Party B. The confidentiality duration should be 5 years. \n\nThe first page should include definitions for key terms such as 'confidential information', 'disclosure', and 'recipient'. \n\nOn the second page, provide clauses detailing the protocol for the return or destruction of confidential information, exceptions to maintaining confidentiality, and the repercussions following a breach of the agreement. \n\nPlease indicate the separation between the first and second pages with a full line of dashed lines ('-----'). Also, make sure that each page is clearly labeled with its respective page number.",
    "decomposed_questions": [
        "Is the generated text a non-disclosure agreement?",
        "Does the generated text consist of two pages?",
        "Is each page of the generated text limited to 250 words?",
        "Is the generated non-disclosure agreement for a software development project involving Party A and Party B?",
        "Does the generated non-disclosure agreement specify a confidentiality duration of 5 years?",
        "Does the first page of the generated non-disclosure agreement include definitions for key terms such as 'confidential information', 'disclosure', and 'recipient'?",
        "Does the second page of the generated non-disclosure agreement provide clauses detailing the protocol for the return or destruction of confidential information?",
        "Does the second page of the generated non-disclosure agreement provide exceptions to maintaining confidentiality?",
        "Does the second page of the generated non-disclosure agreement provide the repercussions following a breach of the agreement?",
        "Does the generated text indicate the separation between the first and second pages with a full line of dashed lines ('-----')?",
        "Does the generated text ensure that each page is clearly labeled with its respective page number?"
    ],
    "subset": "Hard_set",
    "question_label": [
        ["Format"],
        ["Format", "Number"],
        ["Number"],
        ["Content"],
        ["Content"],
        ["Format", "Content"],
        ["Content"],
        ["Content"],
        ["Content"],
        ["Format"],
        ["Format"]
    ]
}
```

### Data Fields
- `id`: a string identifier.
- `subset`: `Hard_Set` or `Easy_Set`.
- `category`: a string containing categorical information.
- `instruction`: a string containing the instruction.
- `input`: a string containing the context information; it may be an empty string.
- `decomposed_questions`: a list of strings, each corresponding to a decomposed requirement.
- `question_label`: a list of lists of strings, where each inner list contains the labels for the corresponding decomposed question (see the example below).

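To illustrate how these fields fit together, the sketch below (a minimal example, not part of the official evaluation code) pairs each record's parallel `decomposed_questions` and `question_label` lists and tallies the label types per subset; the split name is looked up rather than assumed.

```python
from collections import Counter, defaultdict

from datasets import load_dataset

dataset = load_dataset("kqsong/InFoBench")
split = dataset[list(dataset.keys())[0]]  # look up the split name instead of assuming it

# `decomposed_questions` and `question_label` are parallel lists:
# the i-th label list annotates the i-th decomposed question.
label_counts = defaultdict(Counter)
for record in split:
    for labels in record["question_label"]:
        label_counts[record["subset"]].update(labels)

for subset, counts in label_counts.items():
    print(subset, dict(counts))
```
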
## Additional Information

### Licensing Information
The InFoBench Dataset version 1.0.0 is released under the [MIT License](https://github.com/qinyiwei/InfoBench/blob/main/LICENSE).

### Citation Information
```
@article{qin2024infobench,
  title={InFoBench: Evaluating Instruction Following Ability in Large Language Models},
  author={Yiwei Qin and Kaiqiang Song and Yebowen Hu and Wenlin Yao and Sangwoo Cho and Xiaoyang Wang and Xuansheng Wu and Fei Liu and Pengfei Liu and Dong Yu},
  year={2024},
  eprint={2401.03601},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```