versae committed
Commit 3fab44d · 1 Parent(s): ae97716

Update README.md

Files changed (1)
  1. README.md +255 -17
README.md CHANGED
@@ -1,40 +1,147 @@
  ---
  license: apache-2.0
  tags:
- - generated_from_trainer
  metrics:
  - bleu
  datasets:
  - versae/modernisa
  model-index:
- - name: byt5-base-finetuned-modernisa
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # byt5-base-finetuned-modernisa

- This model is a fine-tuned version of [google/byt5-base](https://huggingface.co/google/byt5-base) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1176
- - Bleu: 44.888
- - Gen Len: 18.4465

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

  ### Training hyperparameters

@@ -61,9 +168,140 @@ The following hyperparameters were used during training:
  | 0.0907 | 2.84 | 80000 | 0.1176 | 44.888 | 18.4465 |

  ### Framework versions

  - Transformers 4.13.0.dev0
  - Pytorch 1.10.0+cu111
  - Datasets 1.15.2.dev0
  - Tokenizers 0.10.3

  ---
  license: apache-2.0
  tags:
+ - digital humanities
  metrics:
  - bleu
+ - cer
  datasets:
  - versae/modernisa
  model-index:
+ - name: modernisa-byt5-base
  results: []
+ language:
+ - es
+ pipeline_tag: text2text-generation
  ---

+ # Model Card for modernisa-byt5-base
+
+ <!-- Provide a quick summary of what the model is/does. [Optional] -->
+ This model translates historical, non-normalized Spanish with historical orthography into modern normalized Spanish. It is a fine-tuned version of the multilingual text-to-text transformer ByT5 (Xue et al., 2021, 2022) for translation from 17th-century Spanish to modern Spanish.
+
+ # Table of Contents
+
+ - [Model Card for modernisa-byt5-base](#model-card-for-modernisa-byt5-base)
+ - [Table of Contents](#table-of-contents)
+ - [Model Details](#model-details)
+ - [Model Description](#model-description)
+ - [Uses](#uses)
+ - [Direct Use](#direct-use)
+ - [Downstream Use [Optional]](#downstream-use-optional)
+ - [Out-of-Scope Use](#out-of-scope-use)
+ - [Bias, Risks, and Limitations](#bias-risks-and-limitations)
+ - [Recommendations](#recommendations)
+ - [Training Details](#training-details)
+ - [Training Data](#training-data)
+ - [Training Procedure](#training-procedure)
+ - [Preprocessing](#preprocessing)
+ - [Speeds, Sizes, Times](#speeds-sizes-times)
+ - [Evaluation](#evaluation)
+ - [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
+ - [Testing Data](#testing-data)
+ - [Factors](#factors)
+ - [Metrics](#metrics)
+ - [Results](#results)
+ - [Model Examination](#model-examination)
+ - [Environmental Impact](#environmental-impact)
+ - [Technical Specifications [optional]](#technical-specifications-optional)
+ - [Model Architecture and Objective](#model-architecture-and-objective)
+ - [Compute Infrastructure](#compute-infrastructure)
+ - [Hardware](#hardware)
+ - [Software](#software)
+ - [Citation](#citation)
+ - [Glossary [optional]](#glossary-optional)
+ - [More Information [optional]](#more-information-optional)
+ - [Model Card Authors [optional]](#model-card-authors-optional)
+ - [Model Card Contact](#model-card-contact)
+ - [How to Get Started with the Model](#how-to-get-started-with-the-model)
+
+ # Model Details
+
+ ## Model Description
+
+ <!-- Provide a longer summary of what this model is/does. -->
+ This model translates historical, non-normalized Spanish with historical orthography into modern normalized Spanish. It is a fine-tuned version of the multilingual text-to-text transformer ByT5 (Xue et al., 2021, 2022), specifically of [google/byt5-base](https://huggingface.co/google/byt5-base), trained on a parallel corpus of 44 Spanish-language Golden Age dramas for translation from 17th-century Spanish to modern Spanish.
+
+ - **Developed by:** [Javier de la Rosa](https://huggingface.co/versae)
+ - **Shared by [Optional]:** More information needed
+ - **Model type:** Transformer
+ - **Language(s) (NLP):** es
+ - **License:** apache-2.0
+ - **Parent Model:** [ByT5-Base](https://huggingface.co/google/byt5-base)
+ - **Resources for more information:**
+   - [GitHub Repo](https://github.com/versae/modernisa)
+   - [Associated Paper](https://dh2022.dhii.asia/abstracts/files/DE_LA_ROSA_Javier_The_Moderni_a_Project__Orthographic_Modern.html)
+   - [Demo](https://huggingface.co/spaces/versae/modernisa)
+
+ # Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ The model was developed to provide a tool that produces normalized text, which enables computational analyses (such as distances between texts, clustering, topic modeling, sentiment analysis, stylometry, etc.), facilitates modern editions of historical texts, and thus alleviates a job that has so far been done manually. It is a resource for historians and editors who manually transcribe not-yet-digitized texts produced in the 17th century, which are held in cultural heritage institutions, especially libraries and archives. While all the dramas used are written in verse, the model was not tested on prose; the quality of the translation of prose into modern normalized Spanish might therefore differ significantly from the satisfying results achieved with dramas in verse.
+
+ ## Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+
+ This resource may be used by historians and editors who manually transcribe texts produced in the 17th century that have not yet been digitized and that are typically held in cultural heritage institutions, especially libraries and archives.
+
+ ## Downstream Use [Optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+
+ This model is already fine-tuned.
+
+ ## Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+
+ # Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ It has to be underlined that the parallel corpus was created solely from texts written by four men who lived in counter-reformatory Spain under the rule of the Inquisition. From a contemporary point of view, the worldview of these dramatists is outdated: strongly patriarchal, misogynist, and discriminatory towards non-Catholic people.
+
+ ## Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ The intended users of this model are researchers and editors of historical texts. We cannot imagine any harm done by the modernization of those texts as a technical process; however, reading such texts may be harmful for people who are not acquainted with the worldview produced in 17th-century Spain. Moreover, linguistic change poses a strong challenge to Natural Language Processing (NLP) applications, although, compared with other languages, change within Spanish has not been very pronounced. Further research on the modernization of historical languages is therefore strongly recommended.
+
+ # Training Details
+
+ ## Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ We built a parallel corpus of Spanish Golden Age theater texts from pairs of 44 Golden Age dramas in historical and in current orthography. The two versions were aligned line by line to establish a ground truth for the translation between the different historical varieties of Spanish. The 44 dramas were written by Juan Ruiz de Alarcón (5), Pedro Calderón de la Barca (28), Félix Lope de Vega Carpio (6), and Juan Pérez de Montalbán (5). The dataset is available on [Hugging Face](https://huggingface.co/datasets/versae/modernisa).
+
+ ## Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

  ### Training hyperparameters

  | 0.0907 | 2.84 | 80000 | 0.1176 | 44.888 | 18.4465 |

+
  ### Framework versions

  - Transformers 4.13.0.dev0
  - Pytorch 1.10.0+cu111
  - Datasets 1.15.2.dev0
  - Tokenizers 0.10.3
+
+ ### Preprocessing
+
+ ### Speeds, Sizes, Times
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ After randomizing all 141,023 lines in the corpus, we split it into training (80%), validation (10%), and test (10%) sets, stratified by play. We then fine-tuned T5 and ByT5 base models on sequence lengths of 256, doing a grid search over 3 and 5 epochs, weight decay of 0 and 0.01, learning rates of 0.001 and 0.0001, and with and without a "translate" prompt.
+
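+ The following minimal sketch illustrates the split and grid search described above, assuming the 🤗 Datasets library and a single `train` split in the dataset repository; the original split was additionally stratified by play, which this sketch omits.
+
+ ```python
+ from itertools import product
+
+ from datasets import load_dataset
+
+ # Load the parallel corpus; see the dataset card at
+ # https://huggingface.co/datasets/versae/modernisa for its actual layout.
+ dataset = load_dataset("versae/modernisa", split="train")
+
+ # Approximate 80/10/10 split (the original was also stratified by play).
+ splits = dataset.train_test_split(test_size=0.2, seed=42)
+ held_out = splits["test"].train_test_split(test_size=0.5, seed=42)
+ train, validation, test = splits["train"], held_out["train"], held_out["test"]
+
+ # The grid described above: epochs, weight decay, learning rate, and
+ # whether to prepend a "translate" prompt to each source line.
+ for epochs, weight_decay, learning_rate, use_prompt in product(
+     [3, 5], [0.0, 0.01], [1e-3, 1e-4], [True, False]
+ ):
+     pass  # fine-tune a T5/ByT5 base model at sequence length 256 here
+ ```
+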
+ # Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ A single drama by Lope de Vega (*Castelvines y Monteses*, 1647).
+
+ ### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ More information needed
+
+ ### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ BLEU and character error rate (CER).
+
+ ## Results
+
+ - BLEU: 80.66
+ - CER: 4.20%
+
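+ The figures above can be reproduced with the `evaluate` library, as in this minimal sketch; the example strings are hypothetical, and the `sacrebleu` and `jiwer` backends are assumed to be installed.
+
+ ```python
+ import evaluate
+
+ # Hypothetical model outputs and gold modernizations, one line each.
+ predictions = ["donde fui tan bien recibido"]
+ references = ["donde fui tan bien recibido"]
+
+ bleu = evaluate.load("sacrebleu")  # corpus-level BLEU
+ cer = evaluate.load("cer")         # character error rate (needs jiwer)
+
+ # sacrebleu expects one list of reference strings per prediction.
+ print(bleu.compute(predictions=predictions,
+                    references=[[r] for r in references])["score"])
+ print(cer.compute(predictions=predictions, references=references))
+ ```
+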
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed
+
+ # Citation
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ @inproceedings{de_la_rosa_modernisa_project_2022,
+     address = {Tokyo},
+     title = {The {Moderniſa} {Project}: {Orthographic} {Modernization} of {Spanish} {Golden} {Age} {Dramas} with {Language} {Models}},
+     shorttitle = {The {Moderniſa} {Project}},
+     url = {https://dh2022.dhii.asia/abstracts/files/DE_LA_ROSA_Javier_The_Moderni_a_Project__Orthographic_Modern.html},
+     language = {en},
+     publisher = {Alliance of Digital Humanities Organizations ADHO / The University of Tokyo, Japan},
+     author = {de la Rosa, Javier and Cuéllar, Álvaro and Lehmann, Jörg},
+     month = jul,
+     year = {2022},
+ }
+
+ **APA:**
+
+ # Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
+
+ [Javier de la Rosa](https://huggingface.co/versae) and [Jörg Lehmann](https://huggingface.co/Jrglmn). Questions and comments about the model card can be directed to Jörg Lehmann at joerg.lehmann@sbb.spk-berlin.de.
+
+ # Model Card Contact
+
+ [Jörg Lehmann](mailto:joerg.lehmann@sbb.spk-berlin.de)
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
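+ A minimal sketch using the `text2text-generation` pipeline declared in the front matter; the repository id `versae/modernisa-byt5-base` and the example input line are assumptions, not confirmed by this card.
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned ByT5 model through the text2text-generation pipeline.
+ modernize = pipeline("text2text-generation",
+                      model="versae/modernisa-byt5-base")
+
+ # A line in (hypothetical) 17th-century orthography; the model should
+ # return its modern normalized form.
+ print(modernize("Assi que no ay que fiar en la fortuna")[0]["generated_text"])
+ ```
+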
+ </details>