![](./images/salamandra_header.png)

# Salamandra Model Card

Salamandra is a highly multilingual model pre-trained from scratch that comes in three different
sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants.

The remaining 10% comes from smaller sources in various languages.

The model was trained for 3 epochs, with two final rounds of 0.3B higher-quality tokens each,
meaning that the total number of tokens seen during pre-training amounts to roughly 7.8 trillion tokens.

We provide an extensive Datasheet section following the best practices defined by [(Gebru et al., 2021)](https://arxiv.org/pdf/1803.09010).

<details>
<summary>Datasheet</summary>

#### Motivation

**For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.**

The purpose of creating this dataset is to pre-train the Salamandra family of multilingual models with high performance in a large number of
European languages (35) and code (including 92 different programming languages). In addition, we aim in particular to represent the co-official
languages of Spain: Spanish, Catalan, Galician, and Basque, which is why these languages are oversampled.

We found a substantial lack of massive multilingual data, especially in minority languages (Ostendorff & Rehm, 2023), so part of
our efforts in creating this pre-training dataset went into contributing to large projects such as Community OSCAR
(Brack et al., 2024), which includes 151 languages and 40T words, and CATalog (Palomar-Giner et al., 2024), the largest open
Catalan dataset in the world.

**Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**

The dataset was created by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center - Centro Nacional de
Supercomputación (BSC-CNS), which aims to advance the field of natural language processing through cutting-edge research and development
and the use of HPC. In particular, it was created by the unit's data team, the main contributors being Javier Saiz, Ferran Espuña, and
Jorge Palomar.

However, creating the dataset would not have been possible without the contributions of a large number of collaborators, partners,
and public institutions, which are listed in detail in the acknowledgements.

**Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.**

This work has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).

#### Composition

**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.**

The dataset consists entirely of text documents in various languages. Specifically, data was mainly sourced from the following databases and
repositories:
- **Common Crawl:** Repository that holds website data and is run by the Common Crawl non-profit organization. It is updated monthly and is
distributed under the CC0 1.0 public domain license.
- **GitHub:** Community platform that allows developers to create, store, manage, and share their code. Repositories are crawled and then
distributed with their original licenses, which may vary from permissive to non-commercial licenses.
- **Wikimedia:** Database that holds the collection of databases managed by the Wikimedia Foundation, including Wikipedia, Wikibooks, Wikinews,
Wikiquote, Wikisource, and Wikivoyage. It is updated monthly and is distributed under the Creative Commons Attribution-ShareAlike License 4.0.
- **EurLex:** Repository that holds the collection of legal documents of the European Union, available in all of the EU’s 24 official
languages and run by the Publications Office of the European Union. It is updated daily and is distributed under the Creative Commons
Attribution 4.0 International license.
- **Other repositories:** Specific repositories were crawled under permission for domain-specific corpora, including academic, legal,
and newspaper repositories.

We provide a complete list of dataset sources at the end of this section.

**How many instances are there in total (of each type, if appropriate)?**

The dataset contains a diverse range of instances across multiple languages, with notable adjustments for certain languages. English
represents the largest portion, accounting for 39.08% of the total data. Spanish was upsampled by a factor of 2, bringing its share to 16.59%,
while Catalan (1.84%), Basque (0.26%), and Galician (0.36%) were also upsampled by 2. On the other hand, code-related data was downsampled
by half, making up 6.42% of the total. Other prominent languages include French (6.59%), Russian (5.39%), German (4.25%), and Hungarian
(3.93%), with several additional languages contributing between 1% and 2%, and smaller portions represented by a variety of others.

**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).**

The dataset is a sample from multiple sources, with different weights based on the primary language of the content: Spanish, Catalan,
Basque, and Galician content was upsampled by a factor of two, while programming languages were downsampled by a factor of two. Other
sources were sampled in proportion to their occurrence.

**What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.**

Each instance consists of a text document processed for deduplication, language identification, and source-specific filtering. Some
documents required optical character recognition (OCR) to extract text from non-text formats such as PDFs.

**Is there a label or target associated with each instance? If so, please provide a description.**

Each instance is labeled with a unique identifier, the primary language of the content, and the URL for web-sourced instances. Additional
labels were automatically assigned to detect specific types of content —harmful or toxic content— and to assign preliminary indicators of
undesired qualities —very short documents, high density of symbols, etc.— which were used for filtering instances.

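As an illustration of this labeling scheme, a single instance's metadata might look like the sketch below. The field names are hypothetical; the card does not publish the exact schema.

```python
# Hypothetical sketch of one instance; field names are illustrative,
# not the dataset's actual schema.
instance = {
    "id": "commoncrawl-2023-0001234",      # unique identifier
    "lang": "ca",                          # primary language of the content
    "url": "https://example.cat/article",  # present for web-sourced instances
    "harmful": False,                      # automatically detected harmful/toxic content
    "quality_flags": ["too_short"],        # preliminary indicators of undesired qualities
    "text": "...",
}
```
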
**Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.**

No significant information is missing from the instances.

**Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.**

Instances are related through shared metadata, such as source and language identifiers.

**Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.**

The dataset is split randomly into training, validation, and test sets.

**Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.**

Despite removing duplicated instances within each source, redundancy remains at the paragraph and sentence levels, particularly in
web-sourced instances, where SEO techniques and templates contribute to repeated textual patterns. Some instances may also be duplicated
across sources due to format variations.

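A minimal sketch of the within-source, document-level deduplication described above (exact hashing is an assumption; the card does not state which method was used). Because this pass compares whole documents, paragraph- and sentence-level repetition survives it:

```python
import hashlib

def dedup_documents(docs: list[str]) -> list[str]:
    """Drop exact duplicate documents within a single source."""
    seen: set[str] = set()
    unique: list[str] = []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)  # boilerplate shared by *distinct* docs is kept
    return unique
```
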
**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.**

The dataset is self-contained and does not rely on external resources.

**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description.**

The dataset does not contain confidential data.

**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. If the dataset does not relate to people, you may skip the remaining questions in this section.**

The dataset includes web-crawled content, which may overrepresent pornographic material across languages (Kreutzer et al., 2022). Although
pre-processing techniques were applied to mitigate offensive content, the heterogeneity and scale of web-sourced data make exhaustive
filtering challenging; it is next to impossible to identify all adult content without falling into excessive filtering, which may
negatively affect certain demographic groups (Dodge et al., 2021).

**Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.**

The dataset does not explicitly identify any subpopulations.

**Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.**

Web-sourced instances in the dataset may contain personally identifiable information (PII) that is publicly available on the Web, such as
names, IP addresses, email addresses, and phone numbers. While it would be possible to indirectly identify individuals by combining
multiple data points, the nature and scale of web data make it difficult to parse such information. In any case, efforts are
made to filter or anonymize sensitive data during pre-processing, but some identifiable information may remain in the dataset.

**Does the dataset contain data that might be considered sensitive in any way? If so, please provide a description.**

Given that the dataset includes web-sourced content and other publicly available documents, instances may inadvertently reveal financial
information, health-related details, or forms of government identification, such as social security numbers (Subramani et al., 2023),
especially if the content originates from less-regulated sources or user-generated platforms.

#### Collection Process

**How was the data collected?**

This dataset was built by combining several sources, whose acquisition methods can be classified into three groups:
- Web-sourced datasets with some preprocessing available under permissive license (e.g. Common Crawl).
- Domain-specific or language-specific raw crawls (e.g. Spanish Crawling).
- Manually curated data obtained through collaborators, data providers (by means of legal assignment agreements) or open source projects
(e.g. CATalog).

**What mechanisms or procedures were used to collect the data? How were these mechanisms or procedures validated?**

For each of the three groups defined above, these are the mechanisms used (a minimal integrity-test sketch follows the list):
- Open direct download. Validation: data integrity tests.
- Ad-hoc scrapers or crawlers. Validation: software unit and data integrity tests.
- Direct download via FTP, SFTP, API or S3. Validation: data integrity tests.

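As a sketch of the simplest such data integrity test, a downloaded file can be checked against a published checksum. This is a generic example under that assumption, not the team's actual test suite:

```python
import hashlib
from pathlib import Path

def verify_download(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded file against its published SHA-256 checksum."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```
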
**If the dataset is a sample from a larger set, what was the sampling strategy?**

The sampling strategy was to use the whole dataset resulting from the filtering explained in the ‘preprocessing/cleaning/labelling’ section,
with the particularity that an upsampling of 2 (i.e. twice the probability of sampling a document) was performed for the co-official
languages of Spain (Spanish, Catalan, Galician, Basque), and a downsampling of 1/2 was applied for code (half the probability of sampling a
code document, evenly distributed among all programming languages).

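This strategy reduces to a per-document sampling weight. A minimal sketch follows; the function and its inputs are illustrative, while the weights come from the text above:

```python
CO_OFFICIAL = {"es", "ca", "gl", "eu"}  # Spanish, Catalan, Galician, Basque

def sampling_weight(lang: str, is_code: bool) -> float:
    """Relative probability of sampling a document under the strategy above."""
    if is_code:
        return 0.5  # downsampling of 1/2, spread evenly across programming languages
    if lang in CO_OFFICIAL:
        return 2.0  # upsampling of 2 for the co-official languages of Spain
    return 1.0      # all other sources sampled in proportion to their occurrence
```
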
**Who was involved in the data collection process and how were they compensated?**

This data is generally extracted, filtered and sampled by automated processes. The code required to run these processes has been developed
entirely by members of the LangTech data team, or otherwise obtained from open-source software. Furthermore, no monetary
consideration was paid for acquiring data from suppliers.

**Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances? If not, please describe the timeframe in which the data associated with the instances was created.**

Data were acquired and processed from April 2023 to April 2024. However, as mentioned, much of the data was obtained from open projects such
as Common Crawl, which contains data going back to 2014, so it is the end date (04/2024) rather than the start date that is meaningful.

**Were any ethical review processes conducted? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.**

No particular ethical review process has been carried out, as the data is mostly open and not particularly sensitive. However, we have an
internal evaluation team and a bias team to monitor ethical issues. In addition, we work closely with the ‘Observatori d'Ètica en Intel·ligència
Artificial’ (OEIAC) and the ‘Agencia Española de Supervisión de la Inteligencia Artificial’ (AESIA) to audit the processes we carry out from an
ethical and legal point of view, respectively.

#### Preprocessing

**Was any preprocessing/cleaning/labeling of the data done? If so, please provide a description. If not, you may skip the remaining questions in this section.**

Instances of text documents were not altered, but web-sourced documents were filtered along two dimensions (a schematic version of the filter follows the list):
- Quality: documents with a score lower than 0.8, obtained through CURATE (Palomar-Giner et al., 2024), were filtered out. The score reflects
undesired qualities such as a low number of lines, very short sentences, long footers and headers, and a high percentage of punctuation.
- Harmful or adult content: documents originating from Colossal OSCAR were filtered using LLM-Datasets (Ostendorff et al., 2024) based on
the perplexity from a language model (‘harmful_pp’ field) provided by the Ungoliant pipeline (Abadji et al., 2021).

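Schematically, both dimensions reduce to thresholding per-document scores. In the sketch below, the 0.8 quality threshold comes from the text, while the `harmful_pp` cutoff, its direction, and the field names are assumptions:

```python
HARMFUL_PP_CUTOFF = 1000.0  # assumed value; the actual cutoff is not published

def keep_document(doc: dict) -> bool:
    """Illustrative version of the two filtering dimensions described above."""
    if doc["quality_score"] < 0.8:  # CURATE quality score
        return False
    harmful_pp = doc.get("harmful_pp")  # LM perplexity from the Ungoliant pipeline
    if harmful_pp is not None and harmful_pp < HARMFUL_PP_CUTOFF:
        return False  # assumed direction: low perplexity = resembles harmful text
    return True
```
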
**Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data? If so, please provide a link or other access point to the “raw” data.**

The original raw data was not kept.

**Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.**

Yes, the preprocessing and filtering software is open-sourced. The [CURATE](https://github.com/langtech-bsc/CURATE) pipeline was used for Spanish Crawling and CATalog,
and the [Ungoliant](https://github.com/oscar-project/ungoliant) pipeline was used for the OSCAR project.

#### Uses

**Has the dataset been used for any tasks already? If so, please provide a description.**

It has been used to pre-train the Salamandra model family.

**What (other) tasks could the dataset be used for?**

The data can be used primarily to pre-train other language models, which can then be used for a wide range of use cases. The dataset could
also be used for other tasks such as fine-tuning language models, cross-lingual NLP tasks, machine translation, domain-specific text
generation, and language-specific data analysis.

**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? Is there anything a dataset consumer could do to mitigate these risks or harms?**

Standard language varieties are over-represented in web-crawled content, which impacts language model performance for minority languages.
Language diversity in data is crucial to avoid bias, especially in encoding non-standard dialects, preventing the exclusion of demographic
groups. Moreover, despite legal uncertainties in web-scraped data, we prioritize permissive licenses and privacy protection measures,
acknowledging the challenges posed by personally identifiable information (PII) within large-scale datasets. Our ongoing efforts aim to
address privacy concerns and contribute to a more inclusive linguistic dataset.

**Are there tasks for which the dataset should not be used?**

-

#### Distribution

**Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? If so, please provide a description.**

The dataset will not be released or distributed to third parties. The remaining distribution questions are therefore omitted from this section.

#### Maintenance

**Who will be supporting/hosting/maintaining the dataset?**

The dataset will be hosted by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center (BSC). The team will ensure
regular updates and monitor the dataset for any issues related to content integrity, legal compliance, and bias for the sources they are
responsible for.

**How can the owner/curator/manager of the dataset be contacted?**

The data owner may be contacted at langtech@bsc.es.

**Will the dataset be updated?**

The dataset will not be updated.

**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? If so, please describe these limits and explain how they will be enforced.**

The dataset does not keep sensitive data that could allow direct identification of individuals, apart from the data that is publicly
available in web-sourced content. Due to the sheer volume and diversity of web data, it is not feasible to notify individuals or manage data
retention on an individual basis. However, efforts are made to mitigate the risks associated with sensitive information through
pre-processing and filtering to remove identifiable or harmful content. Despite these measures, vigilance is maintained to address potential
privacy and ethical issues.

**Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.**

Since the dataset will not be updated, only the final version will be kept.

**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**

The dataset does not allow for external contributions.

</details>

### Finetuning Data

This instruction-tuned variant has been trained with a mixture of 276k English, Spanish, and Catalan multi-turn instructions gathered from open datasets: