fix: #1121 expose model request IDs on raw responses #2552
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7b550bc4c2
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
💡 Codex Review
Reviewed commit: ba70901545
💡 Codex Review
Reviewed commit: 7911216d75
This pull request fixes #1121, missing request ID propagation for model responses, so callers can inspect `result.raw_responses[*].request_id` after OpenAI Responses runs, including HTTP streaming runs. It updates `ModelResponse` to carry an optional `request_id`, propagates the OpenAI SDK `_request_id` field through the non-streaming and HTTP streaming Responses paths, and persists that data through `RunState` serialization. The streaming implementation keeps compatibility with custom clients and test doubles that only implement `responses.create()` by falling back to the previous path when `with_streaming_response` is unavailable.

This change intentionally does not add a run-level `last_request_id` convenience API, since a single agent run may include multiple model calls and `raw_responses` already exposes each call individually. It also does not synthesize success request IDs for the websocket transport path, because that metadata is not exposed the same way there.
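The propagation and fallback pattern described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the PR's actual code: the simplified `ModelResponse`, `extract_request_id`, `supports_streaming_response`, and `FakeSDKResponse` are hypothetical stand-ins for the real SDK types.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Hypothetical stand-in for the SDK's ModelResponse; the point is the
# optional request_id field this PR adds alongside the model output.
@dataclass
class ModelResponse:
    output: list = field(default_factory=list)
    request_id: Optional[str] = None

def extract_request_id(sdk_response: Any) -> Optional[str]:
    # The OpenAI SDK attaches the HTTP request ID as a `_request_id`
    # attribute; returning None keeps compatibility with custom clients
    # and test doubles that never set it.
    return getattr(sdk_response, "_request_id", None)

def supports_streaming_response(client: Any) -> bool:
    # Test doubles may only implement responses.create(); the streaming
    # path falls back to the previous behavior when the client does not
    # expose with_streaming_response.
    responses = getattr(client, "responses", None)
    return getattr(responses, "with_streaming_response", None) is not None

# A fake SDK response object carrying the private request-ID attribute.
class FakeSDKResponse:
    _request_id = "req_abc123"

resp = ModelResponse(request_id=extract_request_id(FakeSDKResponse()))
```

After a run, callers would then read each call's ID individually via `result.raw_responses[*].request_id`, which is why no run-level `last_request_id` shortcut is needed.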