Streaming ctable #879

Conversation
In anticipation of adding support for streaming lookups to CTable, move binary search implementation to core, and update users.
In preparation for supporting streaming lookup in CTable, move the specialized parallel copy routine to core.
It was all a misunderstanding, you see.
I documented the streaming lookup code, such as it is. Probably it needs to be better though!
Yay :) This PR includes the changes from multiple other PRs; should it?
As for the documentation, I had a read and found it quite OK, actually; I don't get why you say it's so terrible.
As far as "depending on", I mean that it depends on the review of those patches. I think those other PRs can be reviewed on their own and once they are merged I can merge this one. As far as interfaces go, I would like to see if I can incorporate the packet queue, the enqueue routine, and the flush routine into the lookup streamer itself. Perhaps then we rename it to LookupQueue or something. The user would instantiate the queue with a queue size and some kind of handler procedure: local function have_result(app, pkt, entry)
   if entry then
      -- process the packet, etc.
      app:process(pkt, entry.value)
   else
      -- drop the packet, etc.
      app:drop(pkt)
   end
end
local queue = ctab:make_lookup_queue(32, have_result, app)
...
local function push(app)
   while not link.empty(app.input) do
      local pkt = link.receive(app.input)
      app.queue:enqueue(pkt, get_key(pkt))
   end
   -- flush any remaining enqueued packets
   app.queue:flush()
end

Dunno!
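For concreteness, here is a minimal sketch of what that LookupQueue might look like. It is entirely hypothetical: it assumes an underlying streamer with an entries array indexed from 0, a stream() method that performs the batched lookups, and an is_found(i) predicate, none of which are settled interfaces.

-- Hypothetical sketch of the proposed LookupQueue.  The streamer's
-- entries[i].key / entries[i].value fields, its stream() method, and
-- its is_found(i) predicate are assumptions, not settled API.
local LookupQueue = {}
LookupQueue.__index = LookupQueue

function make_lookup_queue(ctab, size, handler, arg)
   return setmetatable(
      { size = size, n = 0, packets = {},
        streamer = ctab:make_lookup_streamer(size),
        handler = handler, arg = arg },
      LookupQueue)
end

function LookupQueue:enqueue(pkt, key)
   local i = self.n
   self.packets[i] = pkt
   self.streamer.entries[i].key = key
   self.n = i + 1
   -- A full queue flushes itself.
   if self.n == self.size then self:flush() end
end

function LookupQueue:flush()
   if self.n == 0 then return end
   -- Perform all pending lookups at once, then hand each packet and
   -- its entry (or nil on a miss) to the handler.
   self.streamer:stream()
   for i = 0, self.n - 1 do
      local entry
      if self.streamer:is_found(i) then
         entry = self.streamer.entries[i]
      end
      self.handler(self.arg, self.packets[i], entry)
   end
   self.n = 0
end

With a shape like that, the have_result/push example above would work unchanged, and the flush-on-full behavior keeps the queue bounded.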
One related programming pattern that keeps popping into my head is to process packets stepwise, like (loose example):

-- Receive packets
for i = 0, max do
   packets[i] = link.receive(input)
end

-- Lookup state records based on packet fields (could be streaming etc. here)
for i = 0, max do
   state[i] = table_lookup(packets[i])
end
-- Classify forward vs. drop
for i = 0, max do
   if should_forward(packets[i], state[i]) then
      table.insert(forward, i)
   else
      table.insert(drop, i)
   end
end
-- Execute forwardings
for i = 1, #forward do
   forward_packet(packets[forward[i]])
end

-- Execute drops
for i = 1, #drop do
   drop_packet(packets[drop[i]])
end

I see this as having a few potential advantages that admittedly need to be fleshed out.
This is a bit inspired by @alexandergall's learning bridge code, which in turn is probably influenced by his examination of JIT behavior. I am developing the timeline tooling a bit in the background; once I have a demo, that may help to motivate this kind of structure.
Yeah! I'm down with that in general. The funny thing is, we already have an array of packets to work on in a batch: the link. Perhaps we should work directly there? Perhaps in slices? It would certainly simplify the interface to streaming lookup if we could assume that the user is maintaining a parallel array of packets or other data corresponding to the lookup entries. Right now the lwaftr code is structured around the illusion of processing packets one by one, when in reality there is a little queue in the middle. Perhaps it would be best to explicitly partition the stages as you have done above.
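As a loose illustration of the slice idea (not a proposal for the actual interface), a push loop could drain the link in fixed-size batches and run each stage over one batch at a time. This assumes the existing core.link functions link.empty and link.receive; SLICE and process_slice are made-up names.

-- Loose sketch: drain the input link in fixed-size slices and run the
-- staged processing (lookup, classify, forward/drop) one slice at a
-- time.  SLICE and process_slice are hypothetical names.
local link = require("core.link")
local SLICE = 32

local function push(app)
   local input = app.input.input
   local pkts = {}
   while not link.empty(input) do
      -- Gather up to SLICE packets from the link.
      local n = 0
      while n < SLICE and not link.empty(input) do
         pkts[n] = link.receive(input)
         n = n + 1
      end
      -- Run the stages over this slice before taking the next one.
      process_slice(app, pkts, n)
   end
end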
I'm currently not merging these, as per out-of-band discussion with @wingo.
I want to merge this PR. It applies directly (at least, it merged locally into a branch without conflicts) and passes the real test (snabb-nfv-test-vanilla) above. Regarding potential refactors toward a different way of looking things up in #879 (comment): I think we should punt on that until a bit later. This change will allow us to ditch podhashmap so that we're completely on ctable, which I already need in order to migrate the lwaftr to work with the yang configuration stuff. I still think the refactor is a good idea and that we should explore it when not so many things are up in the air.
This merges in snabbco#879.
What happened to this one? Where did we end up? Is something blocking the merge, or are we just waiting for the next release to come along?
This branch is merged into |
The documentation changes indicate how terrible this interface is. Probably we need to improve this before landing!