Nice paper comparing the FP8 inference efficiency of the Nvidia H100 and Intel Gaudi 2:
An Investigation of FP8 Across Accelerators for LLM Inference (2502.01070)
The conclusion is interesting: "Our findings highlight that the Gaudi 2, by leveraging FP8, achieves higher throughput-to-power efficiency during LLM inference"
One aspect of AI hardware accelerators that is often overlooked is how they consume less energy than GPUs. It's nice to see researchers starting to carry out experiments to measure this!
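
For anyone curious about the metric itself, here is a minimal sketch of how a throughput-to-power figure (tokens per second per watt) can be computed from benchmark measurements. All values are hypothetical placeholders, not numbers from the paper:

```python
# Illustrative only: computing a throughput-to-power-efficiency figure
# from a benchmark run. Values are made up, not from the paper.

def tokens_per_second_per_watt(tokens_generated: int,
                               elapsed_seconds: float,
                               avg_power_watts: float) -> float:
    """Throughput (tokens/s) divided by average board power (W)."""
    return (tokens_generated / elapsed_seconds) / avg_power_watts

# Hypothetical run: 10,000 tokens in 20 s at an average of 600 W
print(tokens_per_second_per_watt(10_000, 20.0, 600.0))  # ~0.83 tok/s/W
```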
Gaudi3 results soon...