javi8979 committed
Commit 348bb9a · verified · 1 Parent(s): 9d9580e

Update README.md

Files changed (1):
  1. README.md +26 -0
README.md CHANGED
@@ -51,6 +51,32 @@ SalamandraTA-7b-instruct is a translation LLM that has been instruction-tuned fr
 > **DISCLAIMER:** This version of Salamandra is tailored exclusively for translation tasks. It lacks chat capabilities and has not been trained with any chat instructions.
 
 
+---
+
+## Hardware and Software
+
+### Training Framework
+
+SalamandraTA-7b-base was continually pre-trained using NVIDIA’s [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html),
+which leverages PyTorch Lightning for efficient model training in highly distributed settings.
+
+SalamandraTA-7b-instruct was produced with [FastChat](https://github.com/lm-sys/FastChat).
+
+### Compute Infrastructure
+
+All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and
+operated by the Barcelona Supercomputing Center.
+
+The accelerated partition is composed of 1,120 nodes with the following specifications:
+- 4x NVIDIA Hopper GPUs with 64 GB of HBM2 memory
+- 2x Intel Sapphire Rapids 8460Y+ at 2.3 GHz, 32 cores each (64 cores per node)
+- 4x NDR200 interconnects (800 Gb/s of bandwidth per node)
+- 512 GB of main memory (DDR5)
+- 460 GB of NVMe storage
+
+---
+
+
 ## How to use
 
 You can translate between the following 37 languages:
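
As context for the "Training Framework" paragraph added above: NeMo builds on PyTorch Lightning, where multi-node, multi-GPU runs are configured through the `Trainer`. The sketch below is a minimal illustration of that pattern, not the actual SalamandraTA training configuration; the node count, strategy, precision, and step budget are assumptions (only the 4-GPUs-per-node figure comes from the hardware list above).

```python
# Illustrative sketch only -- NOT the actual SalamandraTA training setup.
# It shows how PyTorch Lightning (the layer NeMo builds on) expresses a
# distributed training run. Node count, strategy, precision, and step
# budget below are assumptions.
import pytorch_lightning as pl

trainer = pl.Trainer(
    accelerator="gpu",
    devices=4,               # 4 Hopper GPUs per MareNostrum 5 accelerated node
    num_nodes=16,            # hypothetical number of nodes for the job
    strategy="ddp",          # distributed data parallel across all GPUs
    precision="bf16-mixed",  # bfloat16 mixed precision (PyTorch Lightning 2.x syntax)
    max_steps=1000,          # placeholder step budget
)

# trainer.fit(model, datamodule) would launch the run once a LightningModule
# and a data module are defined; both are omitted from this sketch.
```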
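
The hunk context above cuts off at the start of "How to use", so for completeness here is a hedged sketch of loading the model for translation with Hugging Face Transformers. The repository id `BSC-LT/salamandraTA-7b-instruct` and the plain prompt format are assumptions, not taken from this commit; the README's full "How to use" section is the authoritative reference.

```python
# Hedged usage sketch -- the repo id and prompt format below are assumptions,
# not confirmed by this diff; see the README's "How to use" section for the
# officially supported invocation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BSC-LT/salamandraTA-7b-instruct"  # assumed Hugging Face Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # common choice for serving 7B models
    device_map="auto",           # place layers on the available GPUs
)

# A plain instruction-style translation prompt (assumed format).
prompt = (
    "Translate the following text from Spanish into English.\n"
    "Spanish: El clima de hoy es soleado.\n"
    "English:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is shown because translation generally favors determinism over sampling diversity.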