Tags: Text Generation · Transformers · PyTorch · English · llama · custom_code · text-generation-inference · Inference Endpoints
omkarthawakar and Ashmal committed on
Commit 047baaf
1 Parent(s): 3775a87

Update README.md (#3)


- Update README.md (1096b2aab0f9833a995211b1531b61eec6009ae3)


Co-authored-by: Ashmal Vayani <Ashmal@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +4 -2
README.md CHANGED

@@ -39,7 +39,7 @@ prompt = "What are the psychological effects of urban living on mental health?"
 
 input_str = template.format(prompt=prompt)
 input_ids = tokenizer(input_str, return_tensors="pt").input_ids
-outputs = model.generate(input_ids, max_length=1000)
+outputs = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
 print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
 ```
 
@@ -87,4 +87,6 @@ python3 -m fastchat.serve.cli --model-path MBZUAI/MobiLlama-05B-Chat
 
 
 ## Intended Uses
-Given the nature of the training data, the MobiLlama-05B model is best suited for prompts using the QA format, the chat format, and the code format.
+Given the nature of the training data, the MobiLlama-05B model is best suited for prompts using the QA format, the chat format, and the code format.
+
+## Citation
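The substantive code change above passes `pad_token_id=tokenizer.eos_token_id` to `model.generate`, which avoids the "Setting `pad_token_id` to `eos_token_id`" warning that `transformers` otherwise emits during open-ended generation when the model config defines no pad token. The unchanged decode line below it slices `outputs[:, input_ids.shape[1]:-1]` before decoding: it drops the echoed prompt tokens and the final token (the EOS) so only the newly generated reply is printed. A minimal sketch of what that slice does, using made-up token ids rather than a real model:

```python
import torch

# Dummy ids standing in for a tokenized prompt and a generate() result.
input_ids = torch.tensor([[11, 12, 13]])           # 3 prompt tokens
outputs = torch.tensor([[11, 12, 13, 21, 22, 2]])  # prompt + reply + eos (2)

# Skip the first input_ids.shape[1] columns (the echoed prompt) and the
# last column (the eos token), keeping only the generated reply.
reply = outputs[:, input_ids.shape[1]:-1]
print(reply.tolist())  # [[21, 22]]
```

The same slice applied to real `generate` output is what `tokenizer.batch_decode(...)[0].strip()` then turns back into text in the model card's snippet.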