[Bugfix][Frontend] Fixed issue where requests with duplicate request IDs might be sent to EngineCore simultaneously #15326

Open · wants to merge 1 commit into main
Conversation

@hidva (Contributor) commented Mar 22, 2025

Currently, vLLM allows users to send duplicate request IDs. At the same time, numerous modules in EngineCore use the request ID as a dictionary key, such as KVCacheManager.req_to_blocks. This relies on the assumption that the Frontend always aborts a request before adding a new one with the same request ID, i.e. that EngineCore sees:

# req1, req2 have the same request_id.
(EngineCoreRequestType.ADD, req1(request_id=RequestId))
(EngineCoreRequestType.ABORT, req1)
(EngineCoreRequestType.ADD, req2(request_id=RequestId))
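
For intuition, here is a minimal, hypothetical sketch of why that ordering matters when per-request state is keyed by request ID (ToyKVCacheManager and its fields are illustrative stand-ins, not the real KVCacheManager API):

class ToyKVCacheManager:
    """Illustrative stand-in for EngineCore state keyed by request ID."""

    def __init__(self) -> None:
        self.req_to_blocks: dict[str, list[int]] = {}

    def add_request(self, request_id: str) -> None:
        # If a duplicate ADD arrives before the ABORT of the old request,
        # the old entry would be silently overwritten and its blocks leaked.
        assert request_id not in self.req_to_blocks, "duplicate request_id"
        self.req_to_blocks[request_id] = []

    def abort_request(self, request_id: str) -> None:
        # Frees whichever request currently owns the key -- wrong if a
        # second ADD has already replaced the first.
        self.req_to_blocks.pop(request_id, None)

mgr = ToyKVCacheManager()
mgr.add_request("req-1")
mgr.add_request("req-1")  # AssertionError: a preceding abort_request is required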

Currently, AsyncLLM ensures that a duplicate request ID must first be aborted before it can be added again, via AsyncLLM._add_request -> OutputProcessor.add_request:

# OutputProcessor.add_request
request_id = request.request_id
if request_id in self.request_states:
    raise ValueError(f"Request id {request_id} already running.")

# AsyncLLM.abort
async def abort(self, request_id: str) -> None:
    """Abort RequestId in OutputProcessor and EngineCore."""

    request_ids = self.output_processor.abort_requests((request_id,))
    # BUG!
    # This operation is not atomic, and there might be a time window during which
    # the request has already been removed from OutputProcessor.request_states,
    # but the corresponding ABORT has not yet been issued to EngineCore.
    await self.engine_core.abort_requests_async(request_ids)

    if self.log_requests:
        logger.info("Aborted request %s.", request_id)

We can easily reproduce the bug by widening this time window with an await asyncio.sleep(13) inserted at the BUG point:
[screenshot: reproduction of the race with the injected delay]
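
The race can also be reproduced as a standalone script. The following is a hedged sketch, not the real AsyncLLM internals (request_states and engine_log are illustrative stand-ins); the sleep plays the role of the await at the BUG point:

import asyncio

request_states: set[str] = set()  # stand-in for OutputProcessor.request_states
engine_log: list[str] = []        # messages in the order EngineCore receives them

async def add(request_id: str) -> None:
    if request_id in request_states:
        raise ValueError(f"Request id {request_id} already running.")
    request_states.add(request_id)
    engine_log.append(f"ADD {request_id}")

async def abort(request_id: str) -> None:
    request_states.discard(request_id)  # the frontend forgets the request...
    await asyncio.sleep(13)             # ...while the ABORT is still in flight
    engine_log.append(f"ABORT {request_id}")

async def main() -> None:
    await add("req-1")
    # The duplicate ADD slips into the window opened by the sleep.
    await asyncio.gather(abort("req-1"), add("req-1"))
    print(engine_log)  # ['ADD req-1', 'ADD req-1', 'ABORT req-1']

asyncio.run(main())

EngineCore receives the second ADD before the ABORT of the first request, violating the ordering assumption above.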

To fix this issue, we categorize completed requests into two types:

  • aborted requests, handled by handle_abort_reqs
  • finished requests, handled by _handle_finished_reqs

and we ensure that the set of requests visible to the Frontend always includes the set of requests visible to EngineCore (a sketch of the resulting ordering follows this paragraph).
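
As a rough illustration, one ordering that restores the invariant is to hand the ABORT to EngineCore before dropping the frontend state. This sketch reuses the helper names visible in the diff fragments quoted below (flatten_req_to_abort, handle_abort_reqs) but is a reconstruction, not the exact patch:

async def abort(self, request_id: str) -> None:
    """Abort request_id in OutputProcessor and EngineCore."""

    request_ids = self.output_processor.flatten_req_to_abort((request_id,))
    await self.engine_core.abort_requests_async(request_ids)
    # Only after the ABORT has been sent to EngineCore does the frontend
    # forget the request, so a duplicate ADD can never pass the
    # request_states check while the old request is still live downstream.
    self.output_processor.handle_abort_reqs(request_ids)

    if self.log_requests:
        logger.info("Aborted request %s.", request_id)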

Commit: [Bugfix][Frontend] Fixed issue where requests with duplicate request IDs might be sent to EngineCore simultaneously

Signed-off-by: 盏一 <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, they run only the fastcheck CI, a small and essential subset of CI tests meant to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Mar 22, 2025
@@ -373,7 +393,6 @@ def _update_stats_from_finished(self, req_state: RequestState,
num_prompt_tokens=len(req_state.prompt_token_ids),
max_tokens_param=req_state.max_tokens_param,
req_stats=req_state.stats)
-        self.lora_states.finish_request(req_state)
@hidva (Contributor Author): See comment

@robertgshaw2-redhat (Collaborator) commented Mar 22, 2025:

Thanks, I agree this can cause leaks if metrics is disabled.

@robertgshaw2-redhat (Collaborator):

Thanks for your contribution! I agree that this is a race condition. Appreciate you digging in!

self.handle_abort_reqs(request_ids_to_abort)
return request_ids_to_abort

def flatten_req_to_abort(self, req_ids: Iterable[str]) -> list[str]:
@robertgshaw2-redhat (Collaborator): Can we call this something more descriptive? get_parent_and_children_reqs?

ret.extend(parent.child_requests)
return ret

# "Aborted request", meaning the frontend first detects that
@robertgshaw2-redhat (Collaborator): This should be a docstring rather than a comment.

# "Finished request", meaning EngineCore first detects that
# the request has ended, and the resources related to the request
# maintained by EngineCore have been released.
def _handle_finished_reqs(self, req_id):
@robertgshaw2-redhat (Collaborator): Let's call this def finish_request(self, request_id: str) -> None.

put the RequestOutput objects into the queue for
handling by the per-request generate() tasks.

* If there is no queue (for usage with LLMEngine),
@robertgshaw2-redhat (Collaborator): Can you add a comment to the docstring about why we finish the stop-string requests externally to this function?

await self.engine_core.abort_requests_async(request_ids)
# At this point, the abort message has already been sent to EngineCore,
@robertgshaw2-redhat (Collaborator): Can you update this comment to explain why this ordering is important for the race condition?

@robertgshaw2-redhat (Collaborator):

Thanks a ton! I reviewed the implementation in detail and you have fixed the problem! I just left some minor comments about naming the functions and comments. Ping me on Slack when this is ready!
