Update README.md
README.md CHANGED
@@ -28,11 +28,10 @@ lyraChatGLM is currently the **fastest ChatGLM-6B** available. To the b
 
 The inference speed of lyraChatGLM has achieved **300x** acceleration upon the early original version. We are still working hard to further improve the performance.
 
-Among its main features are:
+Among its main features are (updated on 2023-06-20):
 - weights: original ChatGLM-6B weights released by THUDM.
 - device: Nvidia GPU with Ampere architecture or Volta architecture (A100, A10, V100...).
-- batch_size: compiled with dynamic batch size, maximum depends on device.
-New Features (2023-06-20)
+- batch_size: compiled with dynamic batch size, maximum depends on device.
 - We now support both CUDA 11.X and 12.X.
 - lyraChatGLM has been further optimized, with model load time reduced from a few minutes to less than 10 s for non-int8 mode, and to around 1 min for int8 mode!
 
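For context on how the items in the list above (original THUDM weights, dynamic batch size, and the int8 / non-int8 load modes) might surface to a user, here is a minimal, hypothetical usage sketch. The class name `LyraChatGLM6B`, its constructor parameters, the file paths, and the `generate` call are assumptions for illustration only and are not confirmed by this diff; consult the repository's own examples for the real API.

```python
# Hypothetical usage sketch for lyraChatGLM -- class name, paths, and
# signatures below are assumptions, not the library's confirmed API.
from lyraChatGLM import LyraChatGLM6B  # assumed entry point

# Paths to converted ChatGLM-6B weights (originally released by THUDM);
# the file layout here is assumed for illustration.
model_path = "./models/1-gpu-fp16.bin"
tokenizer_path = "./models"

# int8_mode=0 keeps fp16 weights (load reported at <10 s after this update);
# int8_mode=1 would use int8 weights (load reported at ~1 min).
model = LyraChatGLM6B(model_path, tokenizer_path, dtype="fp16", int8_mode=0)

# The engine is compiled with a dynamic batch size, so a batch of prompts can
# be passed in one call; the maximum batch depends on the device's memory.
prompts = ["List three machine learning algorithms and when to use them."] * 8
outputs = model.generate(prompts, output_length=256, do_sample=False, top_k=30)
print(outputs)
```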