LLMs Lost in Translation: M-ALERT uncovers Cross-Linguistic Safety Gaps
Paper: 2412.15035 (Published)
We present a set of red-teamed models, trained on the LUMI HPC cluster in Finland (hence the name Aurora). The "-m" designation stands for multimodal, multilingual, multidomain mixture-of-experts (MoE) models, each of which we intend to research. As part of Ontocord.AI's dedication to lawful open-science AI, we coordinated this volunteer effort and contributed to the safety measures. This work should NOT be confused with AuroraGPT: https://www.hpcwire.com/2023/11/13/training-of-1-trillion-parameter-scientific-ai-begins/.