prithivMLmods committed (verified)
Commit 5120de4 · Parent(s): 5a0e00a

Update README.md

Files changed (1): README.md (+2, -1)
README.md CHANGED
@@ -7,6 +7,7 @@ library_name: transformers
 tags:
 - gemma2
 - text-generation-inference
+- f16
 ---
 ![gwq.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/OSjnMjpPnapGr4Kguuh2O.png)
 
@@ -110,4 +111,4 @@ print(tokenizer.decode(outputs[0]))
 For optimal performance on domain-specific tasks, fine-tuning on relevant datasets is required, which demands additional computational resources and expertise.
 
 7. **Context Length Limitation:**
-The model’s ability to process long documents is limited by its maximum context window size. If the input exceeds this limit, truncation may lead to loss of important information.
+The model’s ability to process long documents is limited by its maximum context window size. If the input exceeds this limit, truncation may lead to loss of important information.
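The limitation note in the second hunk describes truncation at the model's maximum context window. The sketch below is not part of the model card; it only illustrates that behavior with the standard `transformers` API. The repo id is a placeholder, the 8192-token window and the `float16` dtype (suggested by the new `f16` tag) are assumptions.

```python
# Minimal sketch: truncate long inputs to an assumed context window before generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/GWQ-9B-Preview"  # placeholder repo id, not confirmed by this commit
max_context = 8192                          # assumed Gemma-2 context window size

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision, per the added "f16" tag (assumption)
    device_map="auto",
)

long_document = "..."  # any input longer than the context window

# Tokens beyond max_context are dropped here; this silent truncation is the
# information loss the limitation section warns about.
inputs = tokenizer(
    long_document,
    truncation=True,
    max_length=max_context,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```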