sampling: add K-Shift sampler #10048
base: master
Conversation
@p-e-w May I ask you for a review, and maybe even some testing, please? While this sampler is very simple by itself, it has quite a strong effect in practice and can be useful as an additional control measure in creative sampling. I've tested K-Shift with and without XTC, and it looks like they can work together quite nicely - you just need to keep in mind how far the first cutout may go.
I am currently sick and will be off the computer for a few days, but I intend to do a full review of this interesting PR soon.
Get well! In the meantime I will keep testing K-Shift to gather more data on different models (tested Nemo/Mistral Small/Gemma 2 so far - they all behave differently).
@MaggotHATE Hi, just a question: does this sampler work like this - select the top n tokens at the beginning, then greedily decode a continuation for each of them and pick the beam with the highest probability? Would this increase decode time, or would streaming need to be disabled? Or can the alternative beams be decoded in parallel? I did try to read the code, but I am not very familiar with the llama.cpp API, so I ended up asking. Please pardon my ignorance.
There are no alternative beams in K-Shift - that would be CoT-decoding, the main subject of the paper. K-Shift simply chooses the exact path at the start of inference ("Decoding step 0") by cutting out the top tokens once, so there is no extra decoding work. I plan on implementing CoT-decoding in a different sampler, but I imagine it would be quite a bulky solution within llama.cpp.
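For illustration only, here is a minimal, self-contained sketch of that idea. This is not the PR's actual code: the names `Candidate`, `KShift`, and the `applied` flag are invented for this example. The point is simply that the top `k` candidates are removed from consideration exactly once, at the first sampling call, and every later call passes through untouched.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical illustration of the K-Shift idea, not the PR's actual code.
struct Candidate {
    int   token;
    float logit;
};

struct KShift {
    int  k       = 5;     // how many top tokens to cut out at step 0
    bool applied = false; // ensures the cutout happens only once

    void apply(std::vector<Candidate> & cands) {
        if (applied || k <= 0 || (int) cands.size() <= k) {
            return; // later steps (and degenerate cases) are left untouched
        }
        // sort by logit, highest first, and drop the top k candidates
        std::sort(cands.begin(), cands.end(),
                  [](const Candidate & a, const Candidate & b) { return a.logit > b.logit; });
        cands.erase(cands.begin(), cands.begin() + k);
        applied = true;
    }
};
```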
I just realized that adding [...]. @slaren Is [...]?
I don't know, I am not sure when that was added, but I think it makes sense. What's the downside of resetting the sampler state after each message? I would think that you wouldn't want to apply the repetition penalties etc. of the previous message to the next message. cc @ggerganov
The only downside I see is tracking switches/states within the samplers themselves in cases where a sampler should be applied only once (like K-Shift, for example). On reset, either the sampler will be applied again, or, without a custom reset function, we won't be able to revert the switch without deleting the sampler object.
Removing the [...] However, what the paper describes cannot be implemented with a sampler alone. The paper is talking about generating k different sequences for the response, each starting with a different token, and then aggregating the results. That would be interesting to implement in an example as a proof of concept, but as it is, I don't think this sampler would be useful by itself without the rest of the algorithm. A bonus would be implementing it using multiple sequences to generate all the responses in parallel at the same time.
Alright, I will revert it then. In recent tests it was still coherent even with the reset. Still, it would be nice to have a way to trigger it once per session - is that even possible in the current sampler chain implementation?
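To make the trade-off concrete, here is a hypothetical sketch of the two reset behaviours being discussed (names invented; this is not the llama.cpp sampler interface): clearing the one-shot flag on reset means the cutout fires again after every reset, while keeping it set makes K-Shift a once-per-session event that can only be re-armed by recreating the sampler.

```cpp
// Hypothetical sketch of the reset question for a one-shot sampler like K-Shift.
struct KShiftState {
    bool applied = false; // set to true after the one-time cutout has run

    // Option A: clearing the flag on reset makes the cutout fire again
    // after every reset (e.g. between messages).
    void reset_between_messages() { applied = false; }

    // Option B: keeping the flag on reset makes the cutout a once-per-session
    // event; re-arming it would require recreating the sampler object.
    void reset_once_per_session() { /* intentionally leaves `applied` untouched */ }
};
```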
I've tested it in practice, and it actually works quite well by itself. In a way, it works similarly to XTC, but under stricter conditions. That alone makes K-Shift more compatible with greedy sampling. As for the main method in the paper, it is interesting, but it will likely become another example app with no prospects of being in [...]
K-Shift is a sampling strategy mentioned in the Chain-of-Thought Reasoning without Prompting paper. It is meant to guide models away from the most obvious start of inference by cutting out a defined number of top tokens once, at the beginning of the dialog. Since the rest of the tokens are not affected by the sampler, the output remains coherent. K-Shift is intended to be used with greedy sampling, and it is claimed to help steer models towards reasoning instead of short answers.
Since a recent commit changed how greedy sampling is achieved, this sampler fits into the main sampling queue and can be combined with the `top_k = 1` setting (a rough usage sketch is included at the end of this description). In my experience it helped with getting different reasoning and less clichéd starts in creative writing, and it can even change the bias of the model - reducing or inducing refusals.

Examples with `Mistral-Nemo-Instruct-2407.q5_k_l`:

- `k = 0`
- `k = 5`
- `k = 14`

This sampler is still in testing, but it feels like a good improvement to sampling overall - however, every model might need its own value for `k`. With K-Shift and XTC, greedy sampling might become useful even for creative sampling.
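If the sampler follows the current llama.cpp sampler-chain API, usage might look roughly like the sketch below. This is only an illustration: `llama_sampler_init_k_shift` is a placeholder name for the entry point this PR adds (the real name and signature may differ), while the rest of the chain mirrors the `top_k = 1` greedy setup mentioned above.

```cpp
#include "llama.h"

// Sketch only: llama_sampler_init_k_shift is a placeholder for the sampler
// introduced by this PR; the actual function name/signature may differ.
static llama_sampler * make_k_shift_greedy_chain(int32_t k_shift, uint32_t seed) {
    llama_sampler * chain = llama_sampler_chain_init(llama_sampler_chain_default_params());

    llama_sampler_chain_add(chain, llama_sampler_init_k_shift(k_shift)); // cut the top k tokens once, at step 0
    llama_sampler_chain_add(chain, llama_sampler_init_top_k(1));         // greedy sampling via top_k = 1
    llama_sampler_chain_add(chain, llama_sampler_init_dist(seed));       // pick the single remaining candidate

    return chain;
}
```

With `k_shift = 0` this degenerates to plain greedy sampling, matching the `k = 0` example above.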