Example code for local inference
#2
opened by anto5040
I would like the following code (tested; it runs) to be added to the model card:
```python
from transformers import pipeline

# Load the model on the GPU; "cuda" requires a CUDA-capable device.
pipe = pipeline("text-generation", model="orai-nlp/Llama-eus-8B", device="cuda")

text = "Euskara adimen artifizialera iritsi da!"  # "Basque has arrived to artificial intelligence!"
print(pipe(text, max_new_tokens=50, num_beams=5))
```
I don't know whether other libraries can be used, but it works with transformers version 4.46.2.
Hi @anto5040,

Thanks for the comment. You can find that code by clicking the "Use this model" button on the right-hand side of the model page.

Cheers.
isanvicente changed discussion status to closed
isanvicente changed discussion status to open
isanvicente changed discussion status to closed