Update README.md
README.md (changed)
@@ -73,7 +73,7 @@ Quantizing the model significantly reduces memory usage while maintaining good p
 | Model Version | Memory Usage |
 |--------------|-------------|
 | Base Model | ~3.6GB |
-
+| 8-bit Quantized | ~2.25GB |
 
 ## License
 This model follows the `apache-2.0` license.
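For context, a minimal sketch of how an 8-bit load along these lines might look, assuming the checkpoint is a Hugging Face transformers-compatible model and the bitsandbytes backend is installed; the model id below is a placeholder, not the actual repository name:

```python
# Sketch only: load the model with 8-bit weights to reproduce the smaller footprint
# shown in the table (~2.25GB vs ~3.6GB). Assumes transformers + bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/your-model"  # placeholder; substitute the real checkpoint name

# load_in_8bit quantizes the linear layers to int8 at load time via bitsandbytes.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```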