Refactor AICI to use WebAssembly Component Model #84
base: main
Conversation
@mmoskal There are still a few TODOs here, I believe:
PR updated with some refactors, among which is an ergonomic improvement to exporting guests. It seemed to me that until the WASI Preview 2 target fully lands, the controllers may need to be built as libraries rather than binaries. The improved export macro hides the machinery: aici/controllers/aici_abi/src/lib.rs, lines 20 to 27 in ce2397a.
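For a sense of what such a macro can hide, here is a minimal sketch in the spirit of that export machinery. All names here (`RunnerGuest`, `export_runner!`, `MyController`) are illustrative assumptions, not the crate's actual API; the real macro expands to the generated component bindings rather than a compile-time check:

```rust
/// Illustrative stand-in for the guest-side contract; the real trait
/// lives in aici_abi and differs in shape.
pub trait RunnerGuest {
    fn new(config: String) -> Self;
    fn mid_process(&mut self, sampled_tokens: Vec<u32>) -> Vec<f32>;
}

/// Sketch of an export macro: the real one expands to constructor
/// shims, method dispatch, and resource tables; this one only enforces
/// the trait bound at compile time.
#[macro_export]
macro_rules! export_runner {
    ($ty:ty) => {
        #[allow(dead_code)]
        fn __aici_export_check() {
            fn assert_runner<T: RunnerGuest>() {}
            assert_runner::<$ty>();
        }
    };
}

// Guest authors would then write only this:
struct MyController {
    tokens: Vec<u32>,
}

impl RunnerGuest for MyController {
    fn new(_config: String) -> Self {
        Self { tokens: Vec::new() }
    }
    fn mid_process(&mut self, sampled_tokens: Vec<u32>) -> Vec<f32> {
        self.tokens.extend(sampled_tokens);
        Vec::new() // no logit biases in this trivial example
    }
}

export_runner!(MyController);
```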
Hi @AaronFriel, I LOOOOOVVVVEEEEE this. My team does a bunch of the infrastructure work upstream supporting Wasm components, and I'd like to see how to help bring this into the project. 🖖
Thanks very much for this PR, @AaronFriel! And thanks @squillace for helping review!
@AaronFriel don't fret, we'll get there. You submitted when we had KubeCon, followed by Easter, followed by the heavens being swallowed by the moon. People are returning at the end of this week…
@squillace I'm in no rush, and pleased to see your review when you're able! Sorry if I did anything to nag you; I don't think I triggered anything on my end since posting the PR?
Nope, just don't like not communicating on PRs when someone is trying to help do the right thing.
This looks great, from my not-very-well-informed POV! Unfortunately, I'm in the middle of some work items that may affect this. In particular, I'm dropping the pre/post callbacks and only leaving the mid callback. It looks like we would be unable to run pre/post fast enough, especially with speculative decoding (I had not considered that in the past). I also want to support native controllers, which is probably relevant here. This may take a few weeks to finish and is quite high priority for us here.
The 1-token penalty in #68 seems very reasonable for the capabilities offered in AICI. I'm not intimately familiar with the workings of the rLLM implementations, beyond what was necessary for this PR, but from your notes it sounds like blocking the LLM holds up an entire batch, effectively a pipeline stall?
In non-speculative implementations, the pre/post happens on the "critical path" of sampling: after we get the logits from the GPU but before we start the next step on the GPU (the next step needs the sampled tokens from the current step). Thus, the GPU sits idle while we hold the entire batch. Now, in principle it would be possible to work on two sets of sequences and swap them (running pre/post on one while the other is already computing logits on the GPU, and vice versa). The problem is that this needs 2x memory for KV cache, which is typically the limiting factor for batch size and thus throughput. It may be possible for the draft model, though, in case it's too fast even with the new only-mid strategy.
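To make the trade-off concrete, here is a minimal sketch of the two schedules. All types and functions are stand-ins, not rLLM APIs:

```rust
// Stand-in types and functions; none of these are rLLM APIs.
struct Batch;
struct Logits;

fn gpu_forward(_b: &Batch) -> Logits { Logits }     // GPU busy
fn sample(_l: &Logits) -> Vec<u32> { Vec::new() }
fn pre_post_process(_b: &mut Batch, _t: &[u32]) {}  // host-side controller work

// Serial schedule: the GPU has nothing queued while pre/post runs,
// so every step pays a pipeline stall for the whole batch.
fn step_serial(batch: &mut Batch) {
    let logits = gpu_forward(batch);
    let tokens = sample(&logits);
    pre_post_process(batch, &tokens); // GPU idle here
}

// Double-buffered schedule: overlap host work on batch B with GPU work
// on batch A. This hides the stall but requires KV cache for both
// batches at once, roughly halving the usable batch size.
fn step_pipelined(batch_a: &mut Batch, batch_b: &mut Batch, prev_tokens: &[u32]) {
    let logits = std::thread::scope(|s| {
        let a: &Batch = batch_a;
        let gpu = s.spawn(move || gpu_forward(a)); // GPU busy on A...
        pre_post_process(batch_b, prev_tokens);    // ...while host works on B
        gpu.join().unwrap()
    });
    let _tokens = sample(&logits);
    std::mem::swap(batch_a, batch_b);
}
```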
Does vLLM have a mechanism for adding and removing sequences from batches, or would it be simpler in AICI to never allow AICI to block the LLM, at the cost of sometimes generating logits you throw away, and in those situations backtrack and resume? Taking this to #68 though, because it sounds like this PR is blocked on understanding that discussion.
This is a significant change to the AICI Runtime (host) and AICI Controller (guest) to use WASI components. As part of this change, a significant amount of unsafe code is removed and the protocol is simplified to remove the need for "BLOB" types and side-channels. The protocol is documented in `wit/controller.wit`, and a minimal WASI runtime is provided to AICI controllers.

Some notes:

* The AICI runtime no longer directly reads and writes to the guest's memory. Instead, the guest provides a `Runner` resource (using WebAssembly Component terminology), which exposes the low-level protocol to the host as a constructor and a trait with methods.
* The Blob protocols are removed entirely, replaced by the `Runner` resource. This and other side-channels for communicating with the runtime, e.g. allowed tokens (logit biases) outside of `MidProcessResult`, are removed.
* The (Variable) Storage and Tokenizer protocols are separate WebAssembly Components, which can be versioned independently of the runtime.
* Types are changed to be consistent with the WebAssembly interface, e.g. `SeqId` is used in far more places to avoid casts.
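As a rough illustration of that shape (hypothetical names and signatures; the authoritative definition is `wit/controller.wit` and its generated bindings), the guest-side resource could look something like:

```rust
// Hypothetical guest-side shape of the contract. The host never reads
// guest memory; it holds an opaque handle to a `Runner` resource and
// calls methods on it, with all data passing by value across the
// component boundary.

pub struct Runner {
    seq_id: u32,      // stands in for the `SeqId` type in the WIT
    tokens: Vec<u32>,
}

impl Runner {
    /// Resource constructor: the host calls this when a sequence starts.
    pub fn new(seq_id: u32, config: String) -> Self {
        let _ = config; // e.g. the controller's JSON argument
        Self { seq_id, tokens: Vec::new() }
    }

    /// One decoding step. The return value carries everything the host
    /// needs (here just logit biases), so no blob side-channel exists.
    pub fn mid_process(&mut self, sampled: Vec<u32>) -> Vec<f32> {
        self.tokens.extend(sampled);
        Vec::new() // empty = no biases in this trivial sketch
    }
}
```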
@mmoskal I'm most of the way through rebasing on the latest changes, though I think it'd be good for us to chat some time (I'll follow up over email) about whether this would be acceptable in the near term. Personally, I'm very excited about this because I want to explore alternatives to the current API design(s) for LLMs and more powerful protocols than the request-response pattern modern LLMs have. Sadly, it is a lot of work to rebase on the frequent protocol changes in this repo, and I haven't been able to make much progress. I would like to propose and discuss a couple of things:
@AaronFriel this is amazing work! I wanna tag @yoshuawuyts here for component and Rust expertise, and @devigned to track the work for usage. I'd LOOOOOOOVVVEEE to try this out.
OK, updated the PR to check off all of the TODOs. Spectre mitigation via a bounded monotonic clock: lines 20 to 49 in ad72b37.
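One plausible reading of that mitigation, as a sketch (the actual implementation is the referenced lines 20 to 49; the constant and names here are assumptions): deny the guest a high-resolution timer by quantizing what the clock reports, since Spectre-style attacks rely on precise timing.

```rust
use std::time::Instant;

/// Assumed granularity; the real value lives in the referenced code.
const RESOLUTION_NS: u64 = 1_000_000; // 1 ms

/// A coarse, monotonic clock to expose to the guest instead of a raw
/// high-resolution timer.
struct BoundedClock {
    start: Instant,
    last: u64,
}

impl BoundedClock {
    fn new() -> Self {
        Self { start: Instant::now(), last: 0 }
    }

    /// Nanoseconds since start, rounded down to RESOLUTION_NS and
    /// guaranteed never to move backwards.
    fn now_ns(&mut self) -> u64 {
        let raw = self.start.elapsed().as_nanos() as u64;
        let coarse = raw - raw % RESOLUTION_NS;
        self.last = self.last.max(coarse);
        self.last
    }
}
```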
And logging is wired up again, to an in-memory output pipe with bounded capacity: lines 66 to 88 in ad72b37.
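A minimal sketch of that kind of sink (assumed behavior: excess output is silently truncated rather than failing the guest; the real code is the referenced lines 66 to 88):

```rust
use std::io::{self, Write};

/// An in-memory log sink with a hard capacity.
struct BoundedPipe {
    buf: Vec<u8>,
    capacity: usize,
    truncated: bool,
}

impl BoundedPipe {
    fn new(capacity: usize) -> Self {
        Self { buf: Vec::new(), capacity, truncated: false }
    }
}

impl Write for BoundedPipe {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        let room = self.capacity.saturating_sub(self.buf.len());
        let take = room.min(data.len());
        self.buf.extend_from_slice(&data[..take]);
        if take < data.len() {
            self.truncated = true; // drop the rest instead of blocking
        }
        Ok(data.len()) // report full length so guest writes never error
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}
```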
With these changes, this works as expected:

```
$ ./server.sh --trace-rt phi2
```

And:

```
$ ./aici.sh build controllers/pyctrl --tag pyctrl
...
$ ./aici.sh run ./scripts/list-of-five.py --ctrl pyctrl
[0]: FIXED "What‧ are‧ the‧ most‧ popular‧ types‧ of‧ vehicles‧?‧\n"
[0]: FIXED "1‧."
[0]:
[0]:
[0]: GEN: " Cars‧\n"
[0]: FIXED "2‧."
[0]:
[0]:
[0]:
[0]: GEN: " B‧uses‧\n"
[0]: FIXED "3‧."
[0]:
[0]:
[0]:
[0]: GEN: " Motor‧cycles‧\n"
[0]: FIXED "4‧."
[0]:
[0]:
[0]:
[0]: GEN: " Tru‧cks‧\n"
[0]: FIXED "5‧."
[0]:
[0]:
[0]:
[0]:
[0]: GEN: " B‧icy‧cles‧\n"
[0]: FIXED "\n"
[0]:
[0]:
[DONE]
[Response] What are the most popular types of vehicles?
1. Cars
2. Buses
3. Motorcycles
4. Trucks
5. Bicycles
response saved to tmp/response.json
Usage: {'sampled_tokens': 22, 'ff_tokens': 37, 'cost': 81}
Timing: {'http_response': 0.07584023475646973, 'data0': 0.07589316368103027, 'first_token': 0.12601089477539062, 'last_token': 1.248687505722046}
Tokens/sec: {'prompt': 19.93196819860192, 'sampling': 17.618499343659753}
Storage: {'result': '1. Cars\n2. Buses\n3. Motorcycles\n4. Trucks\n5. Bicycles\n\n'}
```
Performance, at least for this example, looks to be equal or better on the components branch compared to the main branch.
I really can't wait to try this.
This is great! I love the conciseness of the interface description. I have not had much time to work on AICI lately, focusing on the specific llguidance controller (which is mostly being run natively, but with a similar interface). Just as a general heads up: the problem we ran into with AICI in production is the case where there are more sequences in a batch (and thus parallel controller processes) than cores. This is because I spin for a while on futexes (to minimize latency), and this kills performance when we're out of cores. This would need to be fixed somehow. The latency minimization was mostly there when we still had post/pre_process(); for mid_process() it shouldn't matter that much.
I wonder if the streaming protocol of WASI helps here: instead of using a futex, use IPC with efficient reads and writes to shared circular buffers?
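To illustrate the alternative being suggested, here is a sketch only: a real cross-process version would place this over a shared memory mapping and use `&self` with interior mutability rather than `&mut self`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const CAP: usize = 4096;

/// Single-producer single-consumer ring buffer. Head and tail are
/// atomics so neither side ever takes a lock or spins on a futex;
/// on "full" or "empty" the caller can yield or park instead.
struct Ring {
    head: AtomicUsize, // next slot to write
    tail: AtomicUsize, // next slot to read
    data: [u8; CAP],
}

impl Ring {
    fn new() -> Self {
        Self {
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
            data: [0; CAP],
        }
    }

    fn try_push(&mut self, byte: u8) -> bool {
        let head = self.head.load(Ordering::Relaxed);
        let next = (head + 1) % CAP;
        if next == self.tail.load(Ordering::Acquire) {
            return false; // full
        }
        self.data[head] = byte;
        self.head.store(next, Ordering::Release);
        true
    }

    fn try_pop(&mut self) -> Option<u8> {
        let tail = self.tail.load(Ordering::Relaxed);
        if tail == self.head.load(Ordering::Acquire) {
            return None; // empty
        }
        let byte = self.data[tail];
        self.tail.store((tail + 1) % CAP, Ordering::Release);
        Some(byte)
    }
}
```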