What's up with PTB

#6 by gmukobi - opened

Could you clarify what PTB is and why it's different from the other datasets? It's only listed in two figures in your paper without explanation. In this dataset, it has only 2 very long rows, which may make evaluating small-context-window models challenging, especially if each of your splits is treated equally, since that would significantly overweight these 2 rows.

Allen Institute for AI org

Hi! PTB is the Penn Treebank (see Section 2.2 EVALUATION DATA and Appendix EVALUATION DATA SOURCE DETAILS in the paper for details on the data sources). The version of this dataset that we use (also used by GPT-2, etc.) does not have document separation; that is why each split is one long entry.

Yes, smaller-context-window models will usually get higher perplexities on long texts such as this, but we treat that as a modeling decision that is part of what is being evaluated. Note that the M2D2 splits also do not have document separation. I agree that retaining document separation is definitely better practice, but Paloma is a curation of the datasets already in use in the research community, including their shortcomings. Our aim is to standardize current practices around perplexity evaluation, but we very much hope that future work will iterate on those practices.
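For anyone wondering how a fixed-context model can still be scored on a single very long row like this, here is a minimal sketch (not the Paloma evaluation code) of the common sliding-window perplexity recipe, assuming the Hugging Face `transformers` and `torch` libraries, GPT-2 as an example small-context model, and a hypothetical `long_text` string holding one PTB row:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Example model with a 1024-token context window; any causal LM works similarly.
model_name = "gpt2"
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name).eval()

# One very long row, e.g. something like dataset["validation"][0]["text"] (field name is an assumption).
long_text = "..."
encodings = tokenizer(long_text, return_tensors="pt")

max_len = model.config.n_positions  # 1024 for GPT-2
stride = 512                        # overlap so scored tokens keep some left context
seq_len = encodings.input_ids.size(1)

nll_sum, n_tokens = 0.0, 0
prev_end = 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_len, seq_len)
    target_len = end - prev_end  # only score tokens not already scored in a previous window
    input_ids = encodings.input_ids[:, begin:end]
    target_ids = input_ids.clone()
    target_ids[:, :-target_len] = -100  # mask context-only tokens out of the loss

    with torch.no_grad():
        # loss is the mean negative log-likelihood over the unmasked target tokens
        loss = model(input_ids, labels=target_ids).loss

    nll_sum += loss.item() * target_len  # approximate total NLL (ignores the one-token label shift)
    n_tokens += target_len
    prev_end = end
    if end == seq_len:
        break

perplexity = torch.exp(torch.tensor(nll_sum / n_tokens))
print(f"perplexity over the long row: {perplexity:.2f}")
```

The stride trades compute for context: a stride equal to `max_len` is cheapest but scores many tokens with little preceding context, while a smaller stride gives most tokens a fuller window, which matters more for an undivided document like this PTB split.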

IanMagnusson changed discussion status to closed
