check issue with nginx about chunked header
#1971
The Transfer-Encoding header is a hop-by-hop header, therefore it is not a bug if the upstreams don't receive this header.
Does this mean that even if the client sends the Transfer-Encoding request header, the upstream may not receive it? This is fine in some cases, but when the client uses the Transfer-Encoding request header as a field in a signature, the request received by the upstream lacks this header, so the signature does not match. I am not sure whether the connection between nginx and the upstream also uses Transfer-Encoding for this request?
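To illustrate why dropping a hop-by-hop header breaks a signature, here is a hypothetical signing scheme (the function name, key, and canonicalization are invented for illustration; this is not any real gateway's algorithm). If the signature covers Transfer-Encoding and the proxy strips it, the upstream recomputes a different digest:

```python
import hashlib
import hmac


def sign_headers(headers: dict, key: bytes) -> str:
    """Hypothetical scheme: HMAC-SHA256 over sorted 'name:value' lines."""
    canonical = "\n".join(f"{k.lower()}:{v}" for k, v in sorted(headers.items()))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()


key = b"shared-secret"

# What the client signs:
client_headers = {"Host": "example.com", "Transfer-Encoding": "chunked"}
# What the upstream sees after a proxy strips the hop-by-hop header:
upstream_headers = {"Host": "example.com"}

sig_client = sign_headers(client_headers, key)
sig_upstream = sign_headers(upstream_headers, key)
print(sig_client == sig_upstream)  # False: verification fails upstream
```

This is why signature schemes normally restrict themselves to end-to-end headers.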
Upon reviewing the nginx code, I found that when the client uses Transfer-Encoding, nginx uses Transfer-Encoding: chunked for communication with the client. However, when communicating with the upstream server, it defaults to a non-chunked transfer when there is a request body. If there is no request body, it does not process the Transfer-Encoding header and forwards the request to the server as-is.
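For reference, this buffering behavior is configurable: with `proxy_request_buffering off` (and `proxy_http_version 1.1`, which chunked upstream requests require), nginx streams the body to the upstream instead of buffering it and sending a Content-Length. A minimal config sketch (`/upload` and `backend` are placeholder names):

```nginx
location /upload {
    proxy_http_version 1.1;        # chunked upstream requests need HTTP/1.1
    proxy_request_buffering off;   # stream the body; nginx re-chunks it
    proxy_pass http://backend;
}
```

Note that even then nginx re-frames the body itself, so the upstream sees nginx's chunk boundaries, not the client's.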
Your opinion is consistent with the results of my test. I think this is an issue, because upstreams need to know whether the client passed the Transfer-Encoding header.
The nginx community has not fixed this problem. Is it because nginx cannot proxy requests in chunked form when talking to the upstream? For example, nginx cannot guarantee that the size of each chunk sent by the client is preserved when proxying to the upstream.
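Note that per the HTTP/1.1 message format, chunk boundaries carry no semantics: two different chunkings of the same body decode to identical payloads, so a proxy is free to re-chunk. A minimal sketch (the `chunk_encode`/`chunk_decode` helpers are illustrative, not nginx code):

```python
def chunk_encode(body: bytes, sizes: list) -> bytes:
    """Frame body as an HTTP/1.1 chunked body with the given chunk sizes."""
    out, pos = b"", 0
    for n in sizes:
        part = body[pos:pos + n]
        out += f"{len(part):x}\r\n".encode() + part + b"\r\n"
        pos += n
    return out + b"0\r\n\r\n"  # zero-length chunk terminates the body


def chunk_decode(data: bytes) -> bytes:
    """Decode a chunked body, discarding the chunk boundaries."""
    out, pos = b"", 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)
        if size == 0:
            return out
        out += data[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2  # skip chunk data and trailing CRLF


body = b"hello world"
two_chunks = chunk_encode(body, [5, 6])
one_chunk = chunk_encode(body, [11])
# Different wire framing, identical payload:
print(two_chunks != one_chunk, chunk_decode(two_chunks) == chunk_decode(one_chunk))  # True True
```

So "cannot keep the client's chunk sizes" is not a protocol violation; the concern only matters for schemes that (incorrectly) attach meaning to framing.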
You can try uploading large files; I found in the nginx code that large uploads can be sent to the upstream server in chunked format.
I tested the nginx proxy and it agrees with what you said. But I can't find the code, can you point me to it? @echo-97
Besides that, I found a bug in the Python requests library when setting
Okay, in the function ngx_http_read_client_request_body, when the request body has not been completely received and the current TCP connection is idle, it is possible to set r->reading_body to 1; this flag is then taken into account in the function ngx_http_proxy_create_request.
I modified the code and did a simple test; it works.
nginx/nginx#316