TheBloke committed
Commit d3850bc
1 Parent(s): a043ea3

Maximum sequence length for a Llama 2 model is 4096

Files changed (1)
  1. tokenizer_config.json +1 -1
tokenizer_config.json CHANGED
@@ -19,7 +19,7 @@
     "single_word": false
   },
   "legacy": false,
-  "model_max_length": 2000,
+  "model_max_length": 4096,
   "pad_token": null,
   "padding_side": "right",
   "sp_model_kwargs": {},