Conversation

@jasonish
Member

Non-urgent, just been sitting on this, and a QA run would be nice.

Replace global thread-local packet pools with packet pools owned by thread vars.

The idea is to remove global data such that a library user can have multiple instances of the Suricata engine that don't share any data. As library users can provide their own threads, and may utilize different threading infrastructure such as thread pools, we should do our best not to leak resources from one instance into another.

The downside is that we might lose some optimizations that thread-local storage can provide, such as a faster path to returning a packet to another thread's pool.

This is part of the overall idea that multiple instances of the Suricata engine should be able to co-exist in one process space without cross-contamination. This generally means getting rid of all global state. The packet pool was relatively easy to attack, with the drawback of losing an optimization. It's also a performance-critical module, so it's a good one to look at.
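For illustration, here is a minimal sketch of the before/after ownership model, using simplified stand-in types and function names rather than Suricata's actual packet pool code:

/* Sketch only: simplified stand-ins, not the real Suricata types. */
#include <stdlib.h>

typedef struct Packet_ {
    struct Packet_ *next;            /* freelist link */
} Packet;

typedef struct PacketPool_ {
    Packet *head;                    /* simple LIFO freelist */
} PacketPool;

/* Before: the pool is thread-local global state, keyed by the OS
 * thread. Any engine instance running on this thread sees the same
 * pool, which is what leaks resources between instances. */
static __thread PacketPool g_thread_pkt_pool;

Packet *PacketGetTLS(void)
{
    Packet *p = g_thread_pkt_pool.head;
    if (p != NULL) {
        g_thread_pkt_pool.head = p->next;
        return p;
    }
    return calloc(1, sizeof(Packet));
}

/* After: the pool is a member of the per-thread vars object, so its
 * lifetime and visibility follow the engine instance that created the
 * thread vars, not the OS thread the code happens to run on. */
typedef struct ThreadVars_ {
    PacketPool pkt_pool;             /* owned by this thread's vars */
    /* ... other per-thread state ... */
} ThreadVars;

Packet *PacketGetFromTV(ThreadVars *tv)
{
    Packet *p = tv->pkt_pool.head;
    if (p != NULL) {
        tv->pkt_pool.head = p->next;
        return p;
    }
    return calloc(1, sizeof(Packet));
}

/* Return to the owning thread's pool. A real implementation needs
 * synchronization for cross-thread returns; this sketch ignores that. */
void PacketReturnToTV(ThreadVars *tv, Packet *p)
{
    p->next = tv->pkt_pool.head;
    tv->pkt_pool.head = p;
}

The trade-off mentioned above falls out of this: with thread-local storage a cross-thread return can take a shortcut, while with explicit ownership it has to go through the owning thread's (synchronized) pool.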

Other thoughts:

  • We could just force library users into our threading model, but that seems short-sighted given the move to work-stealing thread pools in most modern programming frameworks.
  • I think autofp is the only runmode that really made use of the batching we lose here.
  • Would pluggable packet pools make sense as a goal? (A rough sketch of one possible shape follows below.)
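
To make that question concrete, here is one hypothetical shape a pluggable pool could take: a small vtable of callbacks registered by the library user, with the engine holding only an opaque handle. None of these names exist in Suricata today; this is purely a sketch of the idea.

/* Hypothetical interface, not existing Suricata API. */
typedef struct Packet_ Packet;                 /* opaque here */

typedef struct PacketPoolOps_ {
    void   *(*init)(void *user_ctx);           /* create a pool for one thread */
    Packet *(*get)(void *pool);                /* take a packet; may allocate */
    void    (*put)(void *pool, Packet *p);     /* return a packet, possibly cross-thread */
    void    (*destroy)(void *pool);            /* tear the pool down */
} PacketPoolOps;

/* The engine would keep the ops plus an opaque pool pointer in the
 * per-thread vars and call through them, never touching a concrete
 * pool type directly. A default implementation could be registered
 * when the library user doesn't supply one. */
typedef struct PacketPoolHandle_ {
    const PacketPoolOps *ops;
    void *pool;
} PacketPoolHandle;

static inline Packet *PoolGet(PacketPoolHandle *h)
{
    return h->ops->get(h->pool);
}

static inline void PoolPut(PacketPoolHandle *h, Packet *p)
{
    h->ops->put(h->pool, p);
}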

TODO:

  • If it is agreed that removing these thread-local globals is best for the long-term design, create a tracking ticket and sub-tasks.

@codecov

codecov bot commented Nov 1, 2025

Codecov Report

❌ Patch coverage is 95.88235% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 84.16%. Comparing base (8542017) to head (0c1b6ed).

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #14236      +/-   ##
==========================================
- Coverage   84.17%   84.16%   -0.01%     
==========================================
  Files        1013     1013              
  Lines      262327   262413      +86     
==========================================
+ Hits       220809   220867      +58     
- Misses      41518    41546      +28     
Flag              Coverage          Δ
fuzzcorpus        63.37% <51.47%>   (+0.02%) ⬆️
livemode          18.84% <75.71%>   (+0.07%) ⬆️
pcap              44.60% <83.82%>   (+0.01%) ⬆️
suricata-verify   64.85% <82.35%>   (-0.04%) ⬇️
unittests         59.21% <80.95%>   (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.


@victorjulien
Member

Passed my QA. Ran this PR with SV master. Local pipeline 5829, run 1087.

@victorjulien victorjulien self-assigned this Nov 1, 2025
@suricata-qa

Information: QA ran without warnings.

Pipeline = 28205

@inashivb
Member

Would pluggable packet pools make sense as a goal?

Could you please tell us what that would entail? Would a library user then be required to set up packet pools before being able to use the rest of Suricata?
