Matching the dataset with the TLA text tree of the corpus
Hello!
There is a tree of TLA texts: https://nubes.bbaw.de/s/xD7MYJrmE8xNBNt. Is it possible to get the corresponding persistent ID from the tree for each dataset sample?
Dear @keshahumonen ,
By "dataset samples" do you mean the Earlier Egyptian and the Demotic samples? Or do you mean the individual sentences within the samples?
Unfortunately, it is not really possible to match the contents of the dataset to specific nodes in the tree.
The sample was extracted/filtered according to two criteria:

1. Dating: Terminus post quem non before the New Kingdom, i.e. the texts are definitely dated before the New Kingdom (late 16th century BCE). Accordingly, texts from different text groups/nodes of the tree are included, and not necessarily all texts of a parent node are included.
2. Quality: Only sentences that (see https://huggingface.co/datasets/thesaurus-linguae-aegyptiae/tla-Earlier_Egyptian_original-v18-premium#data-collection-and-processing)
   - show no destruction,
   - have no questionable readings,
   - have hieroglyphs encoded,
   - are fully lemmatized (and lemmata have a transliteration and a POS),
   - have a German translation.
Consequently, not all sentences of a particular text dated before the New Kingdom (late 16th century BCE), i.e. passing filter (1) above, will necessarily have made it into the "premium" set; in the worst case, not even a single sentence of such a text is included.
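If it helps, the quality filter (2) can be sketched roughly as a per-sentence predicate. Note that the field names below (has_destruction, hieroglyphs, lemmata, translation_de, ...) are only illustrative placeholders, not the dataset's actual schema:

```python
# Rough sketch only: field names are illustrative placeholders,
# not the actual column names of the published dataset.
def meets_quality_criteria(sentence: dict) -> bool:
    """Check the per-sentence quality criteria of filter (2)."""
    lemmata = sentence.get("lemmata", [])
    return (
        not sentence.get("has_destruction", True)                 # no destruction
        and not sentence.get("has_questionable_readings", True)   # no questionable readings
        and bool(sentence.get("hieroglyphs"))                      # hieroglyphs encoded
        and bool(lemmata)                                          # fully lemmatized ...
        and all(l.get("transliteration") and l.get("pos")          # ... with transliteration and POS
                for l in lemmata)
        and bool(sentence.get("translation_de"))                   # German translation present
    )
```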
I understand also from other users that they would like to know what kind of texts/text groups(/text genres) are (at least partially) included in the Earlier Egyptian premium and Demotic premium samples. Is this also your ultimate goal?
However, this is a tricky thing, since the dating of the texts (1), the quality criteria (2) and the text groups/nodes in the corpus tree do not easily match (as described above).
I could try to compile an informative list of text names and their IDs as soon as I have some time.
Thank you for such a quick and detailed answer!
Our ultimate goal is, in fact, to select a specific test subset from the Earlier Egyptian premium dataset, so it would be great to know what kinds of texts/text groups(/text genres) are included in it. It would be even better to have the ID of the corresponding source text for each dataset element (each row in the table). For example, the phrase "𓐩𓏌𓀜 𓂧 𓂋 𓋴" / "nḏ (w)di̯ r =s" entered the dataset from the text Medizinische Texte / pEdwin Smith / 9.18-17.19: Wundenbuch, Hals- und Rumpfverletzungen (Fall 28-48), and we could filter it out if we did not want to work with medical texts or with texts from pEdwin Smith.
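To illustrate, something along these lines is what we have in mind. This is only a sketch: it assumes a per-row "text_path" field with the corpus hierarchy, which the current release does not provide, and the split name is a guess:

```python
from datasets import load_dataset

# Hypothetical sketch: assumes each row carried a "text_path" field with the corpus
# hierarchy (e.g. "Medizinische Texte / pEdwin Smith / ..."), which the current
# release does not provide. The split name "train" is also an assumption.
ds = load_dataset(
    "thesaurus-linguae-aegyptiae/tla-Earlier_Egyptian_original-v18-premium",
    split="train",
)

def keep(row: dict) -> bool:
    # Drop everything coming from medical texts or from pEdwin Smith.
    path = row.get("text_path") or ""
    return "Medizinische Texte" not in path and "pEdwin Smith" not in path

test_subset = ds.filter(keep)
```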
The ID itself does not yet get you the metadata, like the text name/path, right? What kind of metadata would you need? I could compile them for this set.
Hello again, and sorry for the late reply.
For each element of the dataset we would like to have the sentence ID, the text ID, the hierarchy path(s), and the persistent URL, if possible.
I would be grateful if you could send me this information.
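To picture the intended use: we would attach such a table to the dataset rows roughly as in the sketch below. The file name and column names ("sentence_id", "text_id", "hierarchy_path", "persistent_url") are purely hypothetical, since no such file exists yet:

```python
import csv

# Purely hypothetical: assumes a metadata CSV keyed by sentence ID with columns
# "sentence_id", "text_id", "hierarchy_path", "persistent_url".
def load_metadata(path: str) -> dict[str, dict]:
    with open(path, newline="", encoding="utf-8") as f:
        return {row["sentence_id"]: row for row in csv.DictReader(f)}

def attach_metadata(row: dict, metadata: dict[str, dict]) -> dict:
    # Look up the row's sentence ID and copy the text-level metadata onto the row.
    info = metadata.get(row.get("sentence_id", ""), {})
    row["text_id"] = info.get("text_id")
    row["hierarchy_path"] = info.get("hierarchy_path")
    row["persistent_url"] = info.get("persistent_url")
    return row

# Usage (file name is a placeholder):
# metadata = load_metadata("tla_sentence_metadata.csv")
# ds = ds.map(lambda row: attach_metadata(row, metadata))
```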