---
license: llama2
language:
- am
pipeline_tag: text2text-generation
---
|
|
|
This model pairs with the PRETRAINED variant of Amharic LLaMA, available here: https://huggingface.co/iocuydi/llama-2-amharic-3784m/tree/main/pretrained
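For reference, here is a minimal sketch of fetching just the pretrained variant with `huggingface_hub` (the `allow_patterns` filter reflects the `pretrained/` subfolder in the link above; adjust to your setup):

```python
from huggingface_hub import snapshot_download

# Download only the pretrained/ subfolder of the repo.
local_dir = snapshot_download(
    repo_id="iocuydi/llama-2-amharic-3784m",
    allow_patterns="pretrained/*",
)
print(f"Pretrained weights downloaded to: {local_dir}")
```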
|
|
|
|
|
It also requires the Llama 2 weights and this CLIP model: https://huggingface.co/openai/clip-vit-large-patch14-336
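As a rough sketch, the CLIP vision encoder can be loaded through `transformers`; how it is wired into the multimodal pipeline is covered in the repository linked below:

```python
from transformers import CLIPImageProcessor, CLIPVisionModel

# Load the CLIP vision tower and its matching image preprocessor.
clip_id = "openai/clip-vit-large-patch14-336"
vision_tower = CLIPVisionModel.from_pretrained(clip_id)
image_processor = CLIPImageProcessor.from_pretrained(clip_id)
```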
|
|
|
|
|
More information on running the model is available here: https://github.com/iocuydi/amharic-llama-llava
|
|
|
Cite:
|
```
@misc{andersland2024amharic,
      title={Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages},
      author={Michael Andersland},
      year={2024},
      eprint={2403.06354},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```