kvaishnavi committed (verified)
Commit 4236bc7 · Parent(s): d3da9b4

Update README.md

Files changed (1): README.md (+53, -53)
README.md CHANGED
@@ -81,69 +81,69 @@ The table below shows the average throughput of the first 256 tokens generated (

 | Batch Size, Sequence Length | ONNX RT INT4 | PyTorch Eager INT4 | PyTorch Compile INT4 | Llama.cpp INT4 | INT4 SpeedUp ORT/PyTorch Eager | INT4 SpeedUp ORT/PyTorch Compile | INT4 SpeedUp ORT/Llama.cpp |
 | --- | --- | --- | --- | --- | --- | --- | --- |
- | 1,16 | 238.97 | 17.75 | 11.36 | 183.17 | 13.46 | 21.04 | 1.30 |
- | 1,64 | 233.74 | 17.74 | 11.32 | 182.77 | 13.17 | 20.65 | 1.28 |
- | 1,256 | 208.52 | 17.82 | 11.34 | 182.15 | 11.70 | 18.38 | 1.14 |
- | 1,1024 | 174.19 | 17.85 | 11.36 | 166.39 | 9.76 | 15.34 | 1.05 |
- | 1,2048 | 146.10 | 17.96 | 11.35 | 153.50 | 8.14 | 12.87 | 0.95 |
- | 1,3840 | 112.68 | 17.91 | 11.34 | 141.53 | 6.29 | 9.94 | 0.80 |
- | 4,16 | 286.73 | 60.90 | 40.89 | 180.82 | 4.71 | 7.01 | 1.59 |
- | 4,64 | 282.87 | 60.88 | 41.03 | 177.69 | 4.65 | 6.89 | 1.59 |
- | 4,256 | 268.30 | 60.85 | 40.90 | 166.34 | 4.41 | 6.56 | 1.61 |
- | 4,1024 | 223.30 | 60.86 | 40.90 | 133.39 | 3.67 | 5.46 | 1.67 |
- | 4,2048 | 187.62 | 60.80 | 40.93 | 106.03 | 3.09 | 4.58 | 1.77 |
- | 4,3840 | 145.59 | 55.96 | 40.88 | 78.12 | 2.60 | 3.56 | 1.86 |
- | 8,16 | 541.04 | 121.92 | 81.96 | 171.90 | 4.44 | 6.60 | 3.15 |
- | 8,64 | 532.68 | 121.87 | 81.98 | 166.33 | 4.37 | 6.50 | 3.20 |
- | 8,256 | 480.00 | 122.06 | 81.80 | 148.07 | 3.93 | 5.87 | 3.24 |
- | 8,1024 | 360.60 | 122.48 | 81.59 | 103.58 | 2.94 | 4.42 | 3.48 |
- | 8,2048 | 274.16 | 105.92 | 81.71 | 74.01 | 2.59 | 3.36 | 3.70 |
- | 8,3840 | 192.50 | 79.74 | 81.50 | 49.23 | 2.41 | 2.36 | 3.91 |
- | 16,16 | 1007.69 | 244.16 | 163.09 | 156.99 | 4.13 | 6.18 | 6.42 |
- | 16,64 | 966.42 | 244.89 | 163.26 | 148.23 | 3.95 | 5.92 | 6.52 |
- | 16,256 | 827.37 | 244.84 | 163.23 | 121.85 | 3.38 | 5.07 | 6.79 |
- | 16,1024 | 536.73 | 209.13 | 169.30 | 71.57 | 2.57 | 3.17 | 7.50 |
- | 16,2048 | 375.31 | 153.95 | 158.77 | 45.97 | 2.44 | 2.36 | 8.16 |
- | 16,3840 | 243.66 | OOM | OOM | 28.33 | | | 8.60 |
+ | 1, 16 | 238.97 | 17.75 | 11.36 | 183.17 | 13.46 | 21.04 | 1.30 |
+ | 1, 64 | 233.74 | 17.74 | 11.32 | 182.77 | 13.17 | 20.65 | 1.28 |
+ | 1, 256 | 208.52 | 17.82 | 11.34 | 182.15 | 11.70 | 18.38 | 1.14 |
+ | 1, 1024 | 174.19 | 17.85 | 11.36 | 166.39 | 9.76 | 15.34 | 1.05 |
+ | 1, 2048 | 146.10 | 17.96 | 11.35 | 153.50 | 8.14 | 12.87 | 0.95 |
+ | 1, 3840 | 112.68 | 17.91 | 11.34 | 141.53 | 6.29 | 9.94 | 0.80 |
+ | 4, 16 | 286.73 | 60.90 | 40.89 | 180.82 | 4.71 | 7.01 | 1.59 |
+ | 4, 64 | 282.87 | 60.88 | 41.03 | 177.69 | 4.65 | 6.89 | 1.59 |
+ | 4, 256 | 268.30 | 60.85 | 40.90 | 166.34 | 4.41 | 6.56 | 1.61 |
+ | 4, 1024 | 223.30 | 60.86 | 40.90 | 133.39 | 3.67 | 5.46 | 1.67 |
+ | 4, 2048 | 187.62 | 60.80 | 40.93 | 106.03 | 3.09 | 4.58 | 1.77 |
+ | 4, 3840 | 145.59 | 55.96 | 40.88 | 78.12 | 2.60 | 3.56 | 1.86 |
+ | 8, 16 | 541.04 | 121.92 | 81.96 | 171.90 | 4.44 | 6.60 | 3.15 |
+ | 8, 64 | 532.68 | 121.87 | 81.98 | 166.33 | 4.37 | 6.50 | 3.20 |
+ | 8, 256 | 480.00 | 122.06 | 81.80 | 148.07 | 3.93 | 5.87 | 3.24 |
+ | 8, 1024 | 360.60 | 122.48 | 81.59 | 103.58 | 2.94 | 4.42 | 3.48 |
+ | 8, 2048 | 274.16 | 105.92 | 81.71 | 74.01 | 2.59 | 3.36 | 3.70 |
+ | 8, 3840 | 192.50 | 79.74 | 81.50 | 49.23 | 2.41 | 2.36 | 3.91 |
+ | 16, 16 | 1007.69 | 244.16 | 163.09 | 156.99 | 4.13 | 6.18 | 6.42 |
+ | 16, 64 | 966.42 | 244.89 | 163.26 | 148.23 | 3.95 | 5.92 | 6.52 |
+ | 16, 256 | 827.37 | 244.84 | 163.23 | 121.85 | 3.38 | 5.07 | 6.79 |
+ | 16, 1024 | 536.73 | 209.13 | 169.30 | 71.57 | 2.57 | 3.17 | 7.50 |
+ | 16, 2048 | 375.31 | 153.95 | 158.77 | 45.97 | 2.44 | 2.36 | 8.16 |
+ | 16, 3840 | 243.66 | OOM | OOM | 28.33 | | | 8.60 |


 | Batch Size, Sequence Length | ONNX RT FP16 | PyTorch Eager FP16 | PyTorch Compile FP16 | Llama.cpp | FP16 SpeedUp ORT/PyTorch Eager | FP16 SpeedUp ORT/PyTorch Compile | FP16 SpeedUp ORT/Llama.cpp |
 | --- | --- | --- | --- | --- | --- | --- | --- |
- | 1,16 | 137.30 | 26.02 | 26.83 | 125.86 | 5.28 | 5.12 | 1.09 |
- | 1,64 | 135.79 | 26.01 | 26.48 | 125.75 | 5.22 | 5.13 | 1.08 |
- | 1,256 | 127.92 | 26.17 | 26.61 | 125.24 | 4.89 | 4.81 | 1.02 |
- | 1,1024 | 114.08 | 26.11 | 26.63 | 117.97 | 4.37 | 4.28 | 0.97 |
- | 1,2048 | 101.68 | 17.77 | 21.05 | 111.08 | 5.72 | 4.83 | 0.92 |
- | 1,3840 | 84.94 | 25.17 | 26.77 | 104.88 | 3.37 | 3.17 | 0.81 |
- | 4,16 | 529.07 | 99.47 | 100.22 | 124.63 | 5.32 | 5.28 | 4.25 |
- | 4,64 | 513.85 | 99.47 | 100.54 | 123.20 | 5.17 | 5.11 | 4.17 |
- | 4,256 | 466.56 | 99.21 | 100.22 | 117.61 | 4.70 | 4.66 | 3.97 |
- | 4,1024 | 352.06 | 99.56 | 100.50 | 100.42 | 3.54 | 3.50 | 3.51 |
- | 4,2048 | 271.02 | 70.12 | 73.66 | 83.95 | 3.86 | 3.68 | 3.23 |
- | 4,3840 | 191.36 | 74.35 | 79.68 | 65.51 | 2.57 | 2.40 | 2.92 |
- | 8,16 | 936.46 | 198.99 | 212.40 | 120.24 | 4.71 | 4.41 | 7.79 |
- | 8,64 | 926.83 | 200.28 | 213.97 | 117.77 | 4.63 | 4.33 | 7.87 |
- | 8,256 | 783.95 | 200.66 | 214.88 | 108.33 | 3.91 | 3.65 | 7.24 |
- | 8,1024 | 511.96 | 183.10 | 201.01 | 82.52 | 2.80 | 2.55 | 6.20 |
- | 8,2048 | 352.86 | 96.99 | 122.10 | 62.41 | 3.64 | 2.89 | 5.65 |
- | 8,3840 | 228.97 | 96.81 | 101.60 | 43.89 | 2.37 | 2.25 | 5.22 |
- | 16,16 | 1675.72 | 396.52 | 422.13 | 112.78 | 4.23 | 3.97 | 14.86 |
- | 16,64 | 1591.61 | 395.66 | 422.47 | 108.36 | 4.02 | 3.77 | 14.69 |
- | 16,256 | 1249.94 | 399.30 | 429.10 | 93.68 | 3.13 | 2.91 | 13.34 |
- | 16,1024 | 685.63 | 270.99 | 292.24 | 60.66 | 2.53 | 2.35 | 11.30 |
- | 16,2048 | 441.15 | 121.17 | 162.93 | 41.30 | 3.64 | 2.71 | 10.68 |
- | 16,3840 | 270.38 | OOM | OOM | 26.50 | 0.00 | 0.00 | 10.20 |
+ | 1, 16 | 137.30 | 26.02 | 26.83 | 125.86 | 5.28 | 5.12 | 1.09 |
+ | 1, 64 | 135.79 | 26.01 | 26.48 | 125.75 | 5.22 | 5.13 | 1.08 |
+ | 1, 256 | 127.92 | 26.17 | 26.61 | 125.24 | 4.89 | 4.81 | 1.02 |
+ | 1, 1024 | 114.08 | 26.11 | 26.63 | 117.97 | 4.37 | 4.28 | 0.97 |
+ | 1, 2048 | 101.68 | 17.77 | 21.05 | 111.08 | 5.72 | 4.83 | 0.92 |
+ | 1, 3840 | 84.94 | 25.17 | 26.77 | 104.88 | 3.37 | 3.17 | 0.81 |
+ | 4, 16 | 529.07 | 99.47 | 100.22 | 124.63 | 5.32 | 5.28 | 4.25 |
+ | 4, 64 | 513.85 | 99.47 | 100.54 | 123.20 | 5.17 | 5.11 | 4.17 |
+ | 4, 256 | 466.56 | 99.21 | 100.22 | 117.61 | 4.70 | 4.66 | 3.97 |
+ | 4, 1024 | 352.06 | 99.56 | 100.50 | 100.42 | 3.54 | 3.50 | 3.51 |
+ | 4, 2048 | 271.02 | 70.12 | 73.66 | 83.95 | 3.86 | 3.68 | 3.23 |
+ | 4, 3840 | 191.36 | 74.35 | 79.68 | 65.51 | 2.57 | 2.40 | 2.92 |
+ | 8, 16 | 936.46 | 198.99 | 212.40 | 120.24 | 4.71 | 4.41 | 7.79 |
+ | 8, 64 | 926.83 | 200.28 | 213.97 | 117.77 | 4.63 | 4.33 | 7.87 |
+ | 8, 256 | 783.95 | 200.66 | 214.88 | 108.33 | 3.91 | 3.65 | 7.24 |
+ | 8, 1024 | 511.96 | 183.10 | 201.01 | 82.52 | 2.80 | 2.55 | 6.20 |
+ | 8, 2048 | 352.86 | 96.99 | 122.10 | 62.41 | 3.64 | 2.89 | 5.65 |
+ | 8, 3840 | 228.97 | 96.81 | 101.60 | 43.89 | 2.37 | 2.25 | 5.22 |
+ | 16, 16 | 1675.72 | 396.52 | 422.13 | 112.78 | 4.23 | 3.97 | 14.86 |
+ | 16, 64 | 1591.61 | 395.66 | 422.47 | 108.36 | 4.02 | 3.77 | 14.69 |
+ | 16, 256 | 1249.94 | 399.30 | 429.10 | 93.68 | 3.13 | 2.91 | 13.34 |
+ | 16, 1024 | 685.63 | 270.99 | 292.24 | 60.66 | 2.53 | 2.35 | 11.30 |
+ | 16, 2048 | 441.15 | 121.17 | 162.93 | 41.30 | 3.64 | 2.71 | 10.68 |
+ | 16, 3840 | 270.38 | OOM | OOM | 26.50 | 0.00 | 0.00 | 10.20 |


 The table below shows the average throughput of the first 256 tokens generated (tps) for INT4 precision on CPU as measured on a Standard D16s v6 (16 vcpus, 64 GiB memory)

 | Batch Size, Sequence Length | ORT INT4 AWQ | Llama.cpp INT4 | INT4 AWQ SpeedUp Llama.cpp |
 | --- | --- | --- |
- | 1,16 | 41.99 | 26.72 | 1.57 |
- | 1,64 | 41.81 | 26.67 | 1.57 |
- | 1,256 | 41.26 | 26.30 | 1.57 |
- | 1,1024 | 37.15 | 24.02 | 1.55 |
- | 1,2048 | 32.68 | 21.82 | 1.50 |
+ | 1, 16 | 41.99 | 26.72 | 1.57 |
+ | 1, 64 | 41.81 | 26.67 | 1.57 |
+ | 1, 256 | 41.26 | 26.30 | 1.57 |
+ | 1, 1024 | 37.15 | 24.02 | 1.55 |
+ | 1, 2048 | 32.68 | 21.82 | 1.50 |


 ## Package Versions
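
For readers of the benchmark tables in this diff: the throughput figures are average tokens per second over the first 256 generated tokens, and each SpeedUp column is the ratio of ONNX Runtime throughput to the corresponding baseline at the same batch size and sequence length. A minimal Python sketch of that arithmetic follows; the helper names are illustrative and not part of the repository, and the sample values are taken from the INT4 GPU table above.

```python
def throughput_tps(num_tokens: int, elapsed_seconds: float) -> float:
    """Average throughput in tokens per second, e.g. over the first 256 generated tokens."""
    return num_tokens / elapsed_seconds


def speedup(ort_tps: float, baseline_tps: float) -> float:
    """SpeedUp column: ONNX Runtime throughput divided by the baseline's throughput."""
    return ort_tps / baseline_tps


# Batch size 1, sequence length 16, INT4 on GPU:
print(round(speedup(238.97, 17.75), 2))   # ~13.46 -> "INT4 SpeedUp ORT/PyTorch Eager"
print(round(speedup(238.97, 183.17), 2))  # ~1.30  -> "INT4 SpeedUp ORT/Llama.cpp"
```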