jartine committed commit 486244c (parent: 14288fa)

Update README.md

Files changed (1): README.md +3 -8
README.md CHANGED
@@ -61,17 +61,12 @@ You can then use the completion mode of the GUI to experiment with this
 model. You can prompt the model for completions on the command line too:
 
 ```
-./Meta-Llama-3.1-405B.Q3_K_M.llamafile -p 'four score and seven' --log-disable
+./Meta-Llama-3.1-405B.Q2_K.llamafile -p 'four score and seven' --log-disable
 ```
 
 This model has a max context window size of 128k tokens. By default, a
-context window size of 4096 tokens is used. You can use a larger context
-window by passing the `-c 8192` flag. The software currently has
-limitations in its llama v3.1 support that may prevent scaling to the
-full 128k size. See our
-[Phi-3-medium-128k-instruct-llamafile](https://huggingface.co/Mozilla/Phi-3-medium-128k-instruct-llamafile)
-repository for llamafiles that are known to work with a 128kb context
-size.
+context window size of 8192 tokens is used. You can use the maximum
+context size by passing the `-c 0` flag.
 
 On Windows there's a 4GB limit on executable sizes. You can work around
 that by downloading the [official llamafile
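The flags added in this hunk can be combined in one invocation. A minimal sketch, assuming the `.llamafile` binary from the diff has been downloaded and marked executable (it is a multi-gigabyte weights-plus-runtime file, so this is illustrative rather than something to run casually):

```shell
# One-off completion from the command line, per the updated README text:
#   -p             prompt to complete
#   --log-disable  suppress log output
#   -c 0           request the model's maximum context window (128k tokens here)
chmod +x ./Meta-Llama-3.1-405B.Q2_K.llamafile
./Meta-Llama-3.1-405B.Q2_K.llamafile -p 'four score and seven' --log-disable -c 0
```

Without `-c`, the default 8192-token context from the diff applies.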