Efficient Request Queueing - Optimizing LLM Performance
Serving LLMs to many applications and users in parallel is challenging because they compete for limited GPU resources. This article is the first in a series on LLM performance, based on our experience with serving self-hosted LLMs at TNG Technology Consulting GmbH. In this first part, we focus on the impact of queueing and discuss different scheduling strategies.
Starting Point: A Bare Inference Engine
An inference engine like vLLM or HuggingFace TGI consists of
- a worker that does the actual work of calculating the next token in a request
- a queue to which requests are added when they first arrive
- a scheduler that takes requests from the queue and moves them to the worker
Why do we need a queue here? Because calculations on the GPU are more performant and resource-efficient when they are done in batches rather than in isolation for individual requests. This backend queue allows the scheduler to pick multiple requests and put them into the same batch for processing.
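To make the interplay of these three components concrete, here is a highly simplified sketch (names and structure are ours, not the actual vLLM or TGI internals): requests land in a FIFO queue, the scheduler picks several of them, and the worker processes them as a single batch.

```python
import queue

# Toy model of the three components (illustrative only, not real vLLM/TGI internals).
request_queue: "queue.Queue[str]" = queue.Queue()  # requests land here on arrival


def generate_next_token(request: str) -> str:
    """Stand-in for the real GPU forward pass that produces the next token."""
    return request + " <token>"


def schedule_batch(max_batch_size: int = 8) -> list[str]:
    """Scheduler: take up to max_batch_size waiting requests for the next batch."""
    batch = []
    while len(batch) < max_batch_size and not request_queue.empty():
        batch.append(request_queue.get())
    return batch


def worker_step() -> list[str]:
    """Worker: process all scheduled requests together in a single batch."""
    return [generate_next_token(request) for request in schedule_batch()]


# Usage: enqueue a few requests, then let the worker process them as one batch.
for prompt in ["hello", "summarize this text", "write a haiku"]:
    request_queue.put(prompt)
print(worker_step())
```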
Note that typically each inference engine serves only a single model, and we have multiple deployments running for different models in parallel.
Problem: "Power Users" Can Block Other Users
When a single user "A" sends a large number of requests, they quickly fill up the queue. Other users ("B" and "C") who send their requests only shortly afterwards are blocked from using the same model (until all requests from "A" have been processed). Note that the picture focuses on vLLM as the inference engine, but the problem is more general and applies to other backends as well.
Solution: Fair Scheduling
At TNG, users don't send requests directly to the vLLM backend but to an API server (what we call the "LLM-Server"). Here we can keep separate queues for each user (and model), and use a scheduler that is not FIFO (first in, first out) but goes round-robin through all user queues. This achieves a form of "fair scheduling": in the diagram, for example, users B and C send their requests a bit later, when user A's first request has already been scheduled. At that point, three of user A's requests have already been waiting for some time, but user C only has to wait for one of user A's requests to complete.
The key idea is: prioritize requests from different users in our own component, not in the inference backend! Typically, you can't change the order of requests once they have been sent to the inference engine, so you have to put them into the right order while they are still in the LLM-Server, where we have full control.
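As a minimal sketch of this idea (class and method names are ours, not an actual TNG or vLLM API), a round-robin scheduler over per-user queues could look like this:

```python
from collections import OrderedDict, deque


class FairScheduler:
    """Round-robin "fair" scheduling over per-user queues (simplified sketch)."""

    def __init__(self):
        self._queues: "OrderedDict[str, deque]" = OrderedDict()

    def enqueue(self, user_id: str, request) -> None:
        """Add a request to the queue of the given user (create it if needed)."""
        self._queues.setdefault(user_id, deque()).append(request)

    def next_request(self):
        """Pop one request from the user whose turn it is, or return None."""
        for user_id, user_queue in list(self._queues.items()):
            if user_queue:
                request = user_queue.popleft()
                # Move this user to the back so other users get their turn first.
                self._queues.move_to_end(user_id)
                return user_id, request
        return None


# Usage: user A floods the scheduler, B and C arrive with one request each.
sched = FairScheduler()
for i in range(4):
    sched.enqueue("A", f"A-{i}")
sched.enqueue("B", "B-0")
sched.enqueue("C", "C-0")
print([sched.next_request() for _ in range(6)])
# Served order: A-0, B-0, C-0, A-1, A-2, A-3 - instead of all of A's requests first.
```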
Possible Extensions
You can consider different aspects when deciding what "fair" scheduling means. In the above example, any new user with a single request is served before any other user is served two or more times in a row. This is "fair" on a number-of-requests level. But you could also look at processing time: how long does a request keep the backend busy? Long prompts and long generations "block" the LLM for other users for a longer time, so maybe shorter requests should take precedence? Unfortunately, the generation length is very difficult to estimate. While some requests come with a "max_tokens" limit, a typical chat message from an interactive AI assistant has no token limit, and can vary between a very short generation ("summarize this text") and a very long one ("tell me a story" / "write all code for xyz").
Depending on the prompts, there can be some benefit in grouping requests by similarity, so that vLLM can maximize cache hits, which improves performance. This kind of KV-cache-aware routing has recently gained attention through frameworks like NVIDIA Dynamo and AIBrix.
In a business context, and for hosted LLMs, the cost of individual requests can be another metric to consider, but it comes with similar challenges.
The solution can also be extended by keeping not just one queue per user (and model) but several queues with different priorities. For example, interactive applications like TNG's AI Assistant with its chat interface should have higher priority, because users who don't see any progress within five seconds will think the application is broken. Users who generate code reviews for tens of files and several thousand lines of code, however, will expect the LLM requests to take some time. And some use cases (like benchmark runs, scheduled via a batch API) should have such low priority that they don't disturb other use cases and are only scheduled when nothing else is running.
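Building on the FairScheduler sketch above, priority tiers could be layered on top like this (tier names and the implementation are illustrative; the rule that batch requests only run when nothing else is pending is enforced later via the backend metrics discussed in the next section):

```python
# Priority tiers on top of per-user round-robin queues (illustrative sketch).
INTERACTIVE, DEFAULT, BATCH = 0, 1, 2   # lower number = higher priority


class TieredScheduler:
    def __init__(self):
        # One round-robin FairScheduler (see sketch above) per priority tier.
        self._tiers = {tier: FairScheduler() for tier in (INTERACTIVE, DEFAULT, BATCH)}

    def enqueue(self, tier: int, user_id: str, request) -> None:
        self._tiers[tier].enqueue(user_id, request)

    def next_request(self):
        """Serve the highest-priority non-empty tier; round-robin within the tier."""
        for tier in sorted(self._tiers):
            item = self._tiers[tier].next_request()
            if item is not None:
                return tier, *item
        return None
```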
Problem: No Backpressure by Backend Queue
Consider again the scenario in which one user A sends a lot of requests at once, and some time later a new user C joins and wants to schedule a single request. If the scheduler on the LLM-Server side sent every request (according to the fair prioritization) immediately to the backend, all requests from A would again accumulate in the FIFO queue. The new user C would again have to wait until all previously received requests have been processed. There would be almost no improvement over the initial scenario without the LLM-Server.
Ideally, you could limit the maximum number of elements in the backend queue, but vLLM doesn't offer that option. Therefore, we have to dynamically adjust the rate at which the scheduler on the LLM-Server side sends new requests to the backend. Our goal here is to keep the backend queue short in order to minimize the latencies experienced by new users.
(The simplest approach would be a static rate limit, but this would likely result in underutilization when most requests are short, and it would be hard to calibrate for different models and load patterns.)
Solution: Fetch Metrics
To make the backend queue length available in the LLM-Server, we fetch the respective Prometheus metrics from the vLLM /metrics endpoint. Our fair scheduler is then only allowed to send requests to the backend as long as the backend queue length metric is smaller than, for example, three. This target length for the backend queue can be lowered for an even shorter latency for new users, until it results in under-utilized batches and reduced efficiency - there is a trade-off. Keep in mind, though, that the target queue length does NOT reflect maximum concurrency: there can still be more than three requests being processed in parallel, because vLLM adds queued requests to the current batch as soon as there is sufficient space ("continuous batching").
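A minimal sketch of this feedback loop could look as follows, assuming the vLLM metric name vllm:num_requests_waiting and a locally running server; the metric name, endpoint, and target length of three may differ for your vLLM version and deployment:

```python
import requests  # third-party HTTP client

METRICS_URL = "http://localhost:8000/metrics"   # hypothetical vLLM metrics endpoint
TARGET_QUEUE_LENGTH = 3


def backend_queue_length() -> float:
    """Read the number of waiting requests from the Prometheus text format."""
    text = requests.get(METRICS_URL, timeout=2).text
    for line in text.splitlines():
        # Metric lines look like: vllm:num_requests_waiting{...} 2.0
        if line.startswith("vllm:num_requests_waiting"):
            return float(line.split()[-1])
    return 0.0


def may_dispatch() -> bool:
    """The fair scheduler may send another request only while the queue is short."""
    return backend_queue_length() < TARGET_QUEUE_LENGTH
```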
Possible Extensions
Once you have the feedback loop between the backend queue and the scheduler in the LLM-Server, you can easily extend the set of vLLM metrics used for scheduling decisions. For example, for a good user experience in TNG's interactive AI Assistant we aim for a high token generation speed (e.g., >7 tokens/s, i.e., roughly 150 ms per token), and if the reported time-per-output-token metric rises above 150 ms, no new requests are scheduled.
You can also configure different metric thresholds for different request priorities. For example, for low-priority requests from the batch API we only schedule a request when the backend queue is completely empty: we would rather risk short periods of underutilized GPUs than cause increased latencies for any later request with higher priority.
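Here is a sketch of such per-priority thresholds, combining the queue-length gate with the time-per-output-token (TPOT) target from the previous paragraph. The tier names and threshold values are illustrative, and how the TPOT value is derived from the vLLM histogram metric (e.g., via Prometheus rate queries) is assumed rather than shown:

```python
# Dispatch limits per priority tier (illustrative values).
THRESHOLDS = {
    "interactive": {"max_waiting": 3, "max_tpot_seconds": 0.150},  # dispatch while queue < 3
    "default":     {"max_waiting": 3, "max_tpot_seconds": None},
    "batch":       {"max_waiting": 1, "max_tpot_seconds": None},   # queue < 1, i.e. only when empty
}


def may_dispatch_with_priority(priority: str, queue_length: float, tpot_seconds: float) -> bool:
    """Dispatch a request of this priority only if all configured limits hold."""
    limits = THRESHOLDS[priority]
    if queue_length >= limits["max_waiting"]:
        return False
    if limits["max_tpot_seconds"] is not None and tpot_seconds > limits["max_tpot_seconds"]:
        return False
    return True
```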
There is quite some potential for optimization: for example, if you allow scheduling while the backend queue length is shorter than three and the current metric reads zero, you can immediately send three requests to the backend before having to fetch the metrics again. In the worst case, none of them fit into the current batch and all of them end up in the backend queue (in which case the new length is three, which is still fine).
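In other words, the dispatch budget between two metric polls is simply the target queue length minus the currently reported queue length:

```python
def dispatch_budget(target_queue_length: int, current_queue_length: int) -> int:
    """How many requests may be sent before the metrics have to be fetched again."""
    return max(0, target_queue_length - current_queue_length)


# Example: target length 3, currently empty queue -> send 3 requests right away.
print(dispatch_budget(3, 0))  # 3
```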
Alternative: Backend-Side Priority Scheduling
Recently, vLLM added a priority-based scheduling option as an alternative to FIFO. With this feature, requests are tagged with a priority before they are sent to the backend. vLLM then regularly checks whether any higher-priority request is waiting and sorts all requests by priority (both the running batch and the waiting queue). This not only lets a high-priority request literally jump the queue but can even move it directly into the batch being processed. The price (aside from some sorting overhead) is that lower-priority requests may be evicted from the running batch and put back into the waiting queue.
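For illustration, tagging a request with a priority via the OpenAI-compatible API could look like the sketch below. This assumes a vLLM version with priority scheduling enabled on the server (e.g., started with --scheduling-policy priority) that accepts a per-request priority field; check the documentation of your vLLM version, as the flag and field name may differ.

```python
from openai import OpenAI  # OpenAI-compatible client talking to a vLLM server

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="my-self-hosted-model",                            # hypothetical model name
    messages=[{"role": "user", "content": "Summarize this text ..."}],
    extra_body={"priority": 0},                              # lower number = higher priority
)
print(response.choices[0].message.content)
```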
Can Backend-Side Priority Scheduling Replace All Queues on the LLM-Server Side?
vLLM treats the request priority as an arbitrary number. This not only allows you to distinguish between default, high priority (interactive AI assistant) and low priority (batch API), but you could even make small increments for every request that the user has already sent to the backend and whose response is still pending. For example, users B and C send only single requests, which will have priority zero. User A sends four requests at once; the first one also has priority zero, but the next ones will have priorities one, two, and three, and will be processed later (higher number = lower priority).
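A sketch of this priority assignment in the LLM-Server, tracking the number of in-flight requests per user (the bookkeeping shown here is our own illustration, not a specific vLLM feature):

```python
from collections import defaultdict

_in_flight = defaultdict(int)  # user_id -> requests sent to the backend, response still pending


def assign_priority(user_id: str) -> int:
    """Priority for the user's next request (lower number = served earlier)."""
    priority = _in_flight[user_id]
    _in_flight[user_id] += 1
    return priority


def on_response(user_id: str) -> None:
    """Call when a response for this user arrives, freeing one in-flight slot."""
    _in_flight[user_id] = max(0, _in_flight[user_id] - 1)


# Usage matching the example above: A sends four requests at once, B and C one each.
print([assign_priority("A") for _ in range(4)])    # [0, 1, 2, 3]
print(assign_priority("B"), assign_priority("C"))  # 0 0
```

Coarse use-case priorities (interactive vs. batch API) could then be added as a constant offset on top of this per-user count.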
There are still some caveats:
- The backend-priority feature is only available for vLLM, not for HuggingFace TGI.
- Having a scheduler on the LLM-Server side allows you to adjust the scheduling rate based on time-per-output-token; backend-side priority scheduling does not control the scheduling rate.
- The impact of frequent re-ordering of queue and batch on latencies still needs to be measured for realistic load scenarios.
Overall, backend-side priority scheduling can be a good strategy for vLLM-based systems, because it simplifies the queueing logic in the upstream LLM-Server. Unfortunately, in a realistic setting you cannot get rid of the LLM-Server as an additional scheduling layer, since you still need a neutral component that assigns priorities to individual requests.
Summary & Outlook
Queueing and scheduling are crucial for applications with multiple users and clients of varying priorities, as they significantly impact the user experience. Scenarios where many clients submit requests in parallel benefit in particular from a dedicated "fair scheduling" strategy. Although backend features like priority scheduling can simplify the optimization, an upstream gateway server is still necessary to fully manage these complexities.
In the next part of this blog post series, we will shift gears and focus on token generation in the inference engine during the prefill and decode phases. In particular, we will discuss resource utilization and strategies for concurrent processing of multiple requests.