Conversation

@wingo (Contributor) commented Apr 18, 2016

The documentation changes indicate how terrible this interface is. Probably we need to improve this before landing!

wingo added 6 commits April 18, 2016 15:56
- In anticipation of adding support for streaming lookups to CTable, move binary search implementation to core, and update users.
- In preparation for supporting streaming lookup in CTable, move the specialized parallel copy routine to core.
- It was all a misunderstanding, you see.
- I documented the streaming lookup code, such as it is. Probably it needs to be better though!
@plajjan (Contributor) commented Apr 18, 2016

Yay :)

This PR includes the changes from multiple other PRs; should it?

@wingo (Contributor, Author) commented Apr 18, 2016

@plajjan this PR depends on #877 and #878

@plajjan (Contributor) commented Apr 18, 2016

@wingo not really. For example, #877 is commits 05f94d7 and 6608e9e, which are included in this PR too, so it would seem #877 and #878 are not needed if this PR is merged, no?

@plajjan (Contributor) commented Apr 18, 2016

As for the documentation, I had a read and found it quite OK actually; I don't get why you say it's so terrible.
I would like to try to use it, though. If I can do that, then I think the docs are good enough :)

@wingo (Contributor, Author) commented Apr 18, 2016

As far as "depending on" goes, I mean that this PR depends on the review of those patches. I think those other PRs can be reviewed on their own, and once they are merged I can merge this one.

As far as interfaces go, I would like to see if I can incorporate the packet queue, the enqueue routine, and the flush routine into the lookup streamer itself. Perhaps then we rename it to LookupQueue or something. The user would instantiate the queue with a queue size and some kind of handler procedure:

local function have_result(app, pkt, entry)
   if entry then
      --- process packet etc
      app:process(pkt, entry.value)
   else
      --- drop packet etc
      app:drop(pkt)
   end
end

local queue = ctab:make_lookup_queue(32, have_result, app)

...

local function push(app)
   while not link.empty(app.input) do
      local pkt = link.receive(app.input)
      app.queue:enqueue(pkt, get_key(pkt))
   end
   -- flush any remaining enqueued packets
   app.queue:flush()
end

Dunno!
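
For what it's worth, a minimal sketch of how such a LookupQueue could hang together, assuming a plain per-key lookup method on the ctable (here called ctab:lookup(); the names and everything else below are illustrative, not the actual ctable API):

local LookupQueue = {}
LookupQueue.__index = LookupQueue

-- ctab: table to look keys up in; size: max queued packets before an
-- automatic flush; handler(arg, pkt, entry) is called once per packet.
function LookupQueue.new(ctab, size, handler, arg)
   return setmetatable(
      { ctab = ctab, size = size, handler = handler, arg = arg,
        packets = {}, keys = {}, length = 0 },
      LookupQueue)
end

function LookupQueue:enqueue(pkt, key)
   self.length = self.length + 1
   self.packets[self.length] = pkt
   self.keys[self.length] = key
   if self.length == self.size then self:flush() end
end

function LookupQueue:flush()
   -- A streaming implementation would hash and prefetch all queued keys
   -- before reading any result; this sketch just resolves them in order.
   for i = 1, self.length do
      local entry = self.ctab:lookup(self.keys[i])  -- assumed lookup method
      self.handler(self.arg, self.packets[i], entry)
      self.packets[i], self.keys[i] = nil, nil
   end
   self.length = 0
end

With something along those lines, ctab:make_lookup_queue(32, have_result, app) in the example above would just be a thin wrapper around LookupQueue.new(ctab, 32, have_result, app).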

@kbara self-assigned this Apr 18, 2016
@lukego (Member) commented Apr 20, 2016

One related programming pattern that keeps popping into my head is to process packets stepwise, like (loose example):

-- Receive packets
for i = 0, max do
   packets[i] = link.receive(input)
end
-- Lookup state records based on packet fields (could be streaming etc here)
for i = 0, max do
   state[i] = table_lookup(packets[i])
end
-- Classify forward vs drop
for i = 0, max do
   if should_forward(packets[i], state[i]) then
      table.insert(forward, i)
   else
      table.insert(drop, i)
   end
end
-- Execute forwardings
for i = 1, #forward do
   forward_packet(packets[forward[i]])
end
-- Execute drops
for i = 1, #drop do
   drop_packet(packets[drop[i]])
end

I see this as having a few potential advantages that admittedly need to be fleshed out:

  1. Divide-and-conquer processing functions into smaller pieces that can be optimized separately.
  2. Give LuaJIT what it wants: small loops with consistent control flow that it can aggressively optimize.
  3. Timeline (engine: Add timeline events for the breathe cycle #873) could show how many CPU cycles are consumed at each step.
  4. Could also integrate with more advanced profiling methods e.g. to see the exact number of cache accesses in each processing step.

This is a bit inspired by @alexandergall's learning bridge code, which in turn is probably influenced by his examination of JIT behavior.

I am developing the timeline tooling a bit in the background; once I have a demo, that may help to motivate this kind of structure.

@wingo (Contributor, Author) commented Apr 20, 2016

Yeah! I'm down with that in general. The funny thing is, we already have an array of packets to work on in a batch: the link. Perhaps we should work directly there? Perhaps in slices?

It would certainly simplify the interface to streaming lookup if we could assume that the user is maintaining a parallel array of packets or other data corresponding to the lookup entries. Right now the lwaftr code is structured around the illusion of processing packets one-by-one, when in reality there is a little queue in the middle. Perhaps it would be best to explicitly partition the stages as you have done above.
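
To make the slice idea a bit more concrete, here is a rough sketch that combines the staged loops above with working through the input link a fixed-size slice at a time. It reuses get_key, app:process and app:drop from the earlier example; the link name ("input") and the ctab:lookup() call are assumptions, not the current API:

local link = require("core.link")

local SLICE = 32
local packets, keys, entries = {}, {}, {}

local function push_sliced(app)
   local input = app.input.input  -- assumes an input link named "input"
   while not link.empty(input) do
      -- Stage 1: receive up to SLICE packets and extract their keys.
      local n = 0
      while n < SLICE and not link.empty(input) do
         n = n + 1
         packets[n] = link.receive(input)
         keys[n] = get_key(packets[n])
      end
      -- Stage 2: resolve all lookups for the slice (the streaming-friendly part).
      for i = 1, n do
         entries[i] = app.ctab:lookup(keys[i])
      end
      -- Stage 3: act on each packet according to its lookup result.
      for i = 1, n do
         if entries[i] then
            app:process(packets[i], entries[i].value)
         else
            app:drop(packets[i])
         end
      end
   end
end

Whether the lookup machinery should own parallel arrays like packets/keys/entries or leave them to the app is exactly the interface question here.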

@kbara (Contributor) commented Jun 22, 2016

@wingo What's your preference for this and #877 and #878: a merge ASAP, or waiting on things like adding int64 support (as you mentioned in #952)?

@kbara (Contributor) commented Jun 27, 2016

I'm currently not merging these, as per out-of-band discussion with @wingo.

@wingo (Contributor, Author) commented Nov 7, 2016

I want to merge this PR. It applies directly (at least it merged without conflicts locally into a branch) and passes the real test (snabb-nfv-test-vanilla) above.

Regarding potential refactors into a different way of looking things up in #879 (comment) -- I think we should punt on that until a bit later. This change will allow us to ditch podhashmap so that we're completely on ctable, something I already need in order to migrate the lwaftr to work with the yang configuration stuff. I still think it's a good idea that we should explore once not so many things are up in the air.

wingo added a commit to Igalia/snabb that referenced this pull request Nov 8, 2016
@plajjan (Contributor) commented Feb 13, 2017

What happened to this one? Where did we end up? Is something blocking the merge, or are we just waiting for the next release to come along?

@wingo (Contributor, Author) commented Feb 13, 2017

This branch is merged into Igalia:lwaftr. We need to spend some time upstreaming that to SnabbCo:master.

@wingo closed this Jun 5, 2017
@wingo deleted the streaming-ctable branch June 5, 2017 13:00