Added Contamination Evidence from GPT-4 Tech Report using String Matching on GPT-4

#11

What are you reporting:

  • Evaluation dataset(s) found in a pre-training corpus. (e.g. COPA found in ThePile)
  • Evaluation dataset(s) found in a pre-trained model. (e.g. FLAN T5 has been trained on ANLI)

Contaminated Evaluation Dataset(s):

  • openai_humaneval
  • ucinlp/drop
  • cais/mmlu
  • gsm8k
  • ibragim-bad/arc_challenge
  • winogrande
  • BIG-bench: the evaluation was determined to be badly contaminated, but no numbers are specified, so contamination is assumed to be 100%.
  • GSM-8K and MATH: the training sets were contaminated, but no numbers are specified, so contamination is assumed to be 100%.

Contaminated model(s): GPT-4

Approach:

  • Data-based approach
  • Model-based approach

Description of your method (3-4 sentences). Evidence of data contamination:

The OpenAI tech report measures cross-contamination between their evaluation datasets and the pre-training data using substring matching. Both evaluation and training data are processed by removing all spaces and symbols, keeping only characters (including numbers). For each evaluation example, they randomly select three substrings of 50 characters (or use the entire example if it is shorter than 50 characters). A match is identified if any of the three sampled evaluation substrings is a substring of the processed training example. This yields a list of contaminated examples.
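
For concreteness, here is a minimal Python sketch of that procedure, reconstructed from the description above (this is not OpenAI's code; the function names and the toy corpus are illustrative):

```python
import random
import re


def normalize(text: str) -> str:
    """Normalize as described in the report: remove all spaces and symbols,
    keeping only alphanumeric characters (case is preserved)."""
    return re.sub(r"[^A-Za-z0-9]", "", text)


def sample_substrings(example: str, n: int = 3, length: int = 50) -> list[str]:
    """Randomly sample n substrings of `length` characters from the normalized
    evaluation example, or use the whole example if it is shorter."""
    s = normalize(example)
    if len(s) <= length:
        return [s]
    starts = [random.randrange(len(s) - length + 1) for _ in range(n)]
    return [s[i:i + length] for i in starts]


def is_contaminated(eval_example: str, training_docs: list[str]) -> bool:
    """Flag an evaluation example as contaminated if any sampled substring
    occurs verbatim in any normalized training document."""
    probes = sample_substrings(eval_example)
    return any(p in normalize(doc) for doc in training_docs for p in probes)


# Toy usage: the second "document" contains the first eval example verbatim.
eval_set = ["What is 2 + 2? Answer: 4", "An uncontaminated question?"]
corpus = ["unrelated text", "a web page saying: What is 2+2... Answer: 4!"]
flagged = [ex for ex in eval_set if is_contaminated(ex, corpus)]
print(flagged)  # ["What is 2 + 2? Answer: 4"]
```

At pre-training scale, the linear scan over documents would need to be replaced by some indexed lookup; the report does not describe the actual implementation.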

Citation

Is there a paper that reports the data contamination or describes the method used to detect data contamination? Yes

url: https://arxiv.org/abs/2303.08774

@article{achiam2023gpt,
  title={GPT-4 technical report},
  author={Achiam, Josh and Adler, Steven and Agarwal, Sandhini and Ahmad, Lama and Akkaya, Ilge and Aleman, Florencia Leoni and Almeida, Diogo and Altenschmidt, Janko and Altman, Sam and Anadkat, Shyamal and others},
  journal={arXiv preprint arXiv:2303.08774},
  year={2023}
}

Important! If you wish to be listed as an author in the final report, please complete this information for all the authors of this Pull Request.

Full name: Ameya Prabhu
Institution: Tübingen AI Center, University of Tübingen
Email: ameya@prabhu.be

Correction: Contaminated Evaluation Dataset(s), not Contaminated Corpora.

Somehow missed this! Really sorry, I will iron out these minor issues in the next commits!

AmeyaPrabhu changed pull request title from Update contamination_report.csv to Added Contamination Evidence from GPT-4 Tech Report using String Matching on GPT-4
Workshop on Data Contamination org

Hi @AmeyaPrabhu !

I see that a few more datasets are reported in the paper; do you plan to add those too?

Best,
Oscar

Hi Oscar,

Yes, I should add them. I recently changed my threshold from reporting only major contamination to reporting all of it. One question: should I add the non-academic benchmarks too? (Tables 9 and 10 in the paper)

Workshop on Data Contamination org

Are those exams available for other teams to perform comparative evaluations?

Yes, most of the data sources are documented in the GPT-4 tech report, and the questions themselves are publicly available (or come from commercial textbooks). However, the evaluation methodology is unclear, as they used third-party contractors to grade them. Would it be worth the effort to add these? Otherwise, I can just add the academic benchmarks for now.

For reference: the Claude 3 models are compared with GPT-4 on a subset of these non-academic benchmarks. However, the Claude 3 report does not provide any evidence about contamination with its training set, which is a shame.

Workshop on Data Contamination org

I think we can skip them for now. As you mention, the methodology is unclear.

Added the remaining academic benchmarks and updated the report with the newly added benchmarks. Should be ready to merge!

Workshop on Data Contamination org

Hi @AmeyaPrabhu !

Thank you for your contribution. Merging to main.

Best,
Oscar

OSainz changed pull request status to merged
