Tcp big 6977 v1 #14321
base: main
Conversation
Ticket: 6977

When we use max data per packet, we may time out the flow with unprocessed data. In this case, process a chunk with a pseudo-packet, and do not erase the flow until all data has been consumed.
src/stream-tcp-reassemble.c (outdated diff)

            *data_len = 0;
        }
    }
    if (*data_len > UINT16_MAX) {
This is the core of the solution!
It's not there in the PR now. Is that on purpose?
Yes, it is still there, but the condition was changed to be configurable.
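The configurable cap under discussion could be sketched roughly as below. This is only an illustration of the idea from the diff, not Suricata's actual code: the function name, the `max_per_call` parameter, and the return convention are all invented here.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: limit how much reassembled data one call hands
 * out to the app layer, and report whether data remains so the flow is
 * kept alive for another (pseudo-packet) pass instead of being erased.
 * Names are hypothetical, not Suricata's real API. */
static int ClampDataLen(uint32_t *data_len, uint32_t max_per_call)
{
    if (*data_len > max_per_call) {
        *data_len = max_per_call;
        return 1; /* data left over: do not erase the flow yet */
    }
    return 0; /* fully consumed in this pass */
}
```

With `max_per_call` set to `UINT16_MAX` this matches the `if (*data_len > UINT16_MAX)` check in the diff; making the threshold a parameter reflects the "condition changed to be configurable" remark above.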
ERROR: QA failed on ASAN_TLPR1_suri. Pipeline = 28454
I think this logic would have to apply to "raw" content as well, as that is what the detection engine uses for rules with just

I guess I also fear we're just pushing the problem to the end of the flow. If we have a large blob of data getting ready for processing in one step, does it really matter whether it happens in "real time" in the packet path or at flow timeout? Both will block the worker thread for other packets. I don't have an alternative solution to offer at this point, but I am a bit skeptical here.
I am not sure I understand what you are saying, but this draft is supposed to solve this problem:
Link to ticket: https://redmine.openinfosecfoundation.org/issues/6977
Describe changes:

DRAFT: Is this the right solution design? (I bet the third and last commit's code is wrong.)
I think so: let's consider the following case with an established flow, where the client is at sequence 1000
The proposed solution here is to dilute the work by processing only 65 Kbytes or so at a time (per packet), and thus to have multiple pseudo-packets ending the flow if needed.
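That per-packet dilution could look roughly like the loop below. The 64 KB chunk size mirrors the `UINT16_MAX` cap from the diff, but the function name and the idea of returning a pseudo-packet count are illustrative assumptions, not the PR's actual implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CHUNK_MAX UINT16_MAX /* cap per (pseudo-)packet, as in the diff */

/* Sketch: drain buffered stream data in CHUNK_MAX-sized slices, as if
 * each slice were handed to one end-of-flow pseudo-packet. Returns how
 * many pseudo-packets would be needed to consume everything. */
static uint32_t DrainInChunks(size_t buffered)
{
    uint32_t pseudo_packets = 0;
    while (buffered > 0) {
        size_t slice = buffered > CHUNK_MAX ? CHUNK_MAX : buffered;
        /* ...hand `slice` bytes to app-layer/detection here... */
        buffered -= slice;
        pseudo_packets++;
    }
    return pseudo_packets;
}
```

For example, roughly 200 KB of unprocessed data at flow timeout would be spread over four pseudo-packets instead of blocking the worker for one long pass.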
Another solution could be to limit the window we accept (and set an event if a packet is outside our window but inside the flow's window), but Suricata would process less data that way...
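The window-limiting alternative mentioned above could be sketched as follows; the function, the `accept_window` parameter, and the event behavior described in the comment are all hypothetical, not existing Suricata code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the alternative: accept a segment only if it starts within
 * `accept_window` bytes of the next expected sequence number. A segment
 * beyond that, but still inside the peer's advertised window, would
 * raise an event instead of being buffered. Hypothetical code. */
static bool SegmentInAcceptWindow(uint32_t next_seq, uint32_t seg_seq,
                                  uint32_t accept_window)
{
    /* unsigned serial arithmetic handles sequence-number wrap-around */
    uint32_t offset = seg_seq - next_seq;
    return offset < accept_window;
}
```

The trade-off stated in the comment holds: segments past the accept window are dropped or flagged rather than reassembled, so Suricata inspects less data in exchange for bounded buffering.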