brittlewis12 committed
Commit 8d0e843
1 parent: 5e69231

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -31,7 +31,7 @@ quantized_by: brittlewis12
 > Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
 
 This repo contains GGUF format model files for Meta’s Llama-3-8B-Instruct,
-**updated as of 2024-04-20** to handle the `<|eot_id|>` special token as EOS token.
+**updated as of 2024-04-29** to incorporate tokenization improvements, as well as previous interventions to handle the `<|eot_id|>` special token as EOS token.
 
 Learn more on Meta’s [Llama 3 page](https://llama.meta.com/llama3).
 
@@ -39,7 +39,7 @@ Learn more on Meta’s [Llama 3 page](https://llama.meta.com/llama3).
 
 GGUF is a file format for representing AI models. It is the third version of the format,
 introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
-Converted with llama.cpp build 2700 (revision [aed82f6](https://github.com/ggerganov/llama.cpp/commit/aed82f6837a3ea515f4d50201cfc77effc7d41b4)),
+Converted with llama.cpp build 2763 (revision [ffe666](https://github.com/ggerganov/llama.cpp/commits/ffe666572f98a686b17a2cd1dbf4c0a982e5ac0a)),
 using [autogguf](https://github.com/brittlewis12/autogguf).
 
 ### Prompt template
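
For context on why the `<|eot_id|>`/EOS metadata change above matters when running these files, here is a minimal usage sketch with llama-cpp-python. Neither the library nor the filename appears in the commit: the Q4_K_M filename is hypothetical, and llama-cpp-python is just one common way to load a GGUF. With the corrected `tokenizer.ggml.eos_token_id` metadata, generation should stop at `<|eot_id|>` on its own; the commented-out `stop` argument shows the explicit workaround that earlier conversions required.

```python
# Sketch: chatting with one of these GGUF files via llama-cpp-python.
# The filename is hypothetical; point model_path at the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
    n_ctx=8192,  # Llama 3 8B context length
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF in one sentence."},
    ],
    max_tokens=128,
    # With the fixed tokenizer.ggml.eos_token_id metadata, generation stops at
    # <|eot_id|> by itself; on older conversions an explicit stop string was needed:
    # stop=["<|eot_id|>"],
)

print(out["choices"][0]["message"]["content"])
```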