---
license: afl-3.0
language:
- yo
datasets:
- afriqa
- xlsum
- menyo20k_mt
- alpaca-gpt4
---
|
|
|
# Model Description |
|
**mistral_7b_yo_instruct** is an instruction-following **text generation** model for Yorùbá.
|
|
|
## Intended uses & limitations |
|
#### How to use |
|
|
|
```python
import requests

API_URL = "https://i8nykns7vw253vx3.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "Content-Type": "application/json"
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# Prompt content: "Pẹlẹ o. Bawo ni o se wa?" ("Hello. How are you?")
output = query({
    "inputs": "Pẹlẹ o. Bawo ni o se wa?",
})

# Model response: "O dabo. O jẹ ọjọ ti o dara." ("I am safe. It was a good day.")
print(output)
```
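
If you prefer to run the model yourself rather than call a hosted endpoint, a minimal local-inference sketch with the `transformers` library is shown below. The repo id is an assumption that mirrors this card's name; replace it with the actual published model path. Note also that the hosted endpoint above accepts an optional `parameters` object (for example `{"max_new_tokens": 64}`) alongside `inputs`.

```python
# Minimal local-inference sketch. The repo id below is an assumption
# based on this card's name; replace it with the actual model path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral_7b_yo_instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",          # requires the accelerate package
)

# Same Yorùbá prompt as the endpoint example above.
inputs = tokenizer("Pẹlẹ o. Bawo ni o se wa?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```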
|
|
|
#### Eval results |
|
Coming soon |
|
|
|
#### Limitations and bias |
|
This model is limited by its training data: instruction-following demonstrations built from question answering, summarization, and translation corpora covering a specific span of time. It may not generalize well to all use cases or to domains outside that data.
|
|
|
#### Training data |
|
This model is fine-tuned on 60k+ instruction-following demonstrations built from an aggregation of datasets ([AfriQA](https://huggingface.co/datasets/masakhane/afriqa), [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum), [MENYO-20k](https://huggingface.co/datasets/menyo20k_mt)) and translations of [Alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4).
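
For reference, all of the constituent datasets are hosted on the Hugging Face Hub. The sketch below shows one way to pull their Yorùbá portions with the `datasets` library; the config names are assumptions (check each dataset card), and the filtering, templating, and translation steps that produced the 60k+ demonstrations are not reproduced here.

```python
# Sketch of loading the source datasets. Config names are assumptions;
# the actual aggregation and translation pipeline is not shown.
from datasets import load_dataset

afriqa = load_dataset("masakhane/afriqa", "yor")      # Yorùbá open-retrieval QA (config assumed)
xlsum = load_dataset("csebuetnlp/xlsum", "yoruba")    # Yorùbá article-summary pairs (config assumed)
menyo = load_dataset("menyo20k_mt")                   # English-Yorùbá parallel sentences
alpaca = load_dataset("vicgalle/alpaca-gpt4")         # English instructions, translated to Yorùbá downstream

for name, ds in [("afriqa", afriqa), ("xlsum", xlsum), ("menyo", menyo), ("alpaca", alpaca)]:
    print(name, {split: len(rows) for split, rows in ds.items()})
```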
|
|
|
#### Use and safety
|
We emphasize that **mistral_7b_yo_instruct** is intended only for research purposes and is not ready to be deployed for general use, chiefly because we have not yet designed adequate safety measures.