
Example code for local inference

#2 opened by anto5040

I would like the following code (tested and working) to be added to the model card:

from transformers import pipeline

# Load the model onto the GPU and generate with beam search
pipe = pipeline("text-generation", model="orai-nlp/Llama-eus-8B", device="cuda")

text = "Euskara adimen artifizialera iritsi da!"
pipe(text, max_new_tokens=50, num_beams=5)

I'm not sure whether other libraries can be used, but this works with transformers version 4.46.2.
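In case the pipeline helper is not wanted, the same beam-search generation can also be done with the lower-level AutoTokenizer/AutoModelForCausalLM API. This is only a sketch, assuming the same model and a recent transformers version; the bfloat16 dtype and device placement are my choices, not something from the model card:

```python
# Sketch: same generation as the pipeline example, without the pipeline helper.
# Assumes transformers >= 4.46, a CUDA device, and that the ~16 GB model
# weights can be downloaded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "orai-nlp/Llama-eus-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: halves memory vs. float32
    device_map="cuda",
)

text = "Euskara adimen artifizialera iritsi da!"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# Beam search with 5 beams, matching the pipeline call above
output = model.generate(**inputs, max_new_tokens=50, num_beams=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The `generate` keyword arguments (`max_new_tokens`, `num_beams`) are the same ones the pipeline forwards internally, so both snippets should produce equivalent output.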

Orai NLP technologies org

Hi @anto5040 ,

Thanks for the comment. You can find that code by clicking on the "Use this model" button on the right side of the model page.
[screenshot: irudia.png]

Cheers.

isanvicente changed discussion status to closed
isanvicente changed discussion status to open
isanvicente changed discussion status to closed
