arxiv:2406.14319

LiveMind: Low-latency Large Language Models with Simultaneous Inference

Published on Jun 20
· Submitted by ChuangtaoChen-TUM on Jun 21

Abstract

In this paper, we introduce a novel low-latency inference framework for large language models (LLMs) that enables LLMs to perform inference with incomplete prompts. By reallocating computation to the prompt input phase, we achieve a substantial reduction in latency, thereby significantly enhancing the interactive experience for users of LLMs. The framework manages the visibility of the streaming prompt to the model, allowing it to infer from the incomplete prompt or to await additional input. Compared with traditional inference methods that use complete prompts, our approach reduces response latency by an average of 59% on the MMLU-Pro dataset while maintaining comparable accuracy. Additionally, our framework enables collaborative inference and output across different models. By employing an LLM for inference and a small language model (SLM) for output, we achieve an average 68% reduction in response latency, alongside a 5.5% improvement in accuracy on MMLU-Pro compared with the SLM baseline. For long prompts exceeding 20 sentences, response latency can be reduced by up to 93%.
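The core idea, deciding per streamed sentence whether to infer an intermediate step now or wait for more input, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name `LiveMindSession`, the `INFER`/`WAIT` actions, and the word-count heuristic standing in for real LLM calls are all assumptions for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class LiveMindSession:
    """Accumulates streamed prompt sentences and caches intermediate inferences,
    so most of the reasoning cost is paid while the user is still typing."""
    seen: list = field(default_factory=list)        # sentences received so far
    inferences: list = field(default_factory=list)  # cached intermediate steps

    def on_sentence(self, sentence: str) -> str:
        """Called for each newly streamed sentence. The model either WAITs
        (not enough new information yet) or INFERs an intermediate step."""
        self.seen.append(sentence)
        if self._worth_inferring(sentence):
            step = self._infer(" ".join(self.seen))
            self.inferences.append(step)
            return "INFER"
        return "WAIT"

    def finalize(self, last_sentence: str) -> str:
        """When the prompt completes, only a short final step remains,
        which is what makes the perceived latency low. In the collaborative
        setting, this final call could go to a faster SLM instead."""
        self.seen.append(last_sentence)
        return f"answer(using {len(self.inferences)} cached steps)"

    # --- toy stand-ins for real LLM calls (assumptions) ---
    def _worth_inferring(self, sentence: str) -> bool:
        return len(sentence.split()) > 3  # toy heuristic, not the paper's policy

    def _infer(self, visible_prompt: str) -> str:
        return f"note[{len(self.seen)}]"  # placeholder intermediate reasoning


session = LiveMindSession()
session.on_sentence("Alice has twelve apples and gives five away.")  # INFER
session.on_sentence("Hmm.")                                          # WAIT
session.finalize("How many remain?")
```

The design point is that `on_sentence` runs concurrently with user input, so by the time `finalize` is called, the cached steps already cover most of the reasoning.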

Community

Paper author Paper submitter

LiveMind: Low-latency Large Language Models with Simultaneous Inference: https://arxiv.org/abs/2406.14319


Congrats on the paper!! Do you have any plans to release the model and demo on the hub?

Paper author Paper submitter

This is a Gradio demo comparing conventional chain-of-thought inference (left) with LiveMind simultaneous inference (right) on streaming input. To run the demo, please visit our GitHub page: https://github.com/ChuangtaoChen-TUM/LiveMind

