Option to get logprobs #20
Comments
I second this. If we get an API like the above, we can look at creating equivalent tools like guidance-ai.
Can someone explain what the "basic guidance" refers to? I understand the ideas behind the other examples mentioned (confidence levels, collecting multiple branches of output more efficiently, custom token heuristics instead of the built-in temperature/topK), but not the basic guidance one. I also wonder if/how exposing logprobs might further complicate the interoperability aspect.
Basic guidance would be an inefficient way to force valid JSON output, etc., similar to how https://github.com/guidance-ai/guidance does it for closed APIs like OpenAI's. It's closely related to custom token control. (Inefficient because it requires round trips, unlike a native guidance solution.)
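For concreteness, here is a minimal sketch of that round-trip approach. Every API name in it (`session.prompt`, `maxTokens`, `topLogprobs`) is assumed for illustration, not part of any real API; the point is that each constrained token costs a full request:

```ts
interface TokenLogprob { token: string; logprob: number; }

// Hypothetical round-trip guidance: force the output to satisfy a validator
// by re-querying the model one token at a time.
async function constrainedComplete(
  session: {
    prompt(
      text: string,
      opts: { maxTokens: number; topLogprobs: number },
    ): Promise<{ topLogprobs: TokenLogprob[] }>;
  },
  promptText: string,
  isValidPrefix: (text: string) => boolean, // e.g. an incremental JSON checker
  maxSteps = 256,
): Promise<string> {
  let output = "";
  for (let step = 0; step < maxSteps; step++) {
    // One full round trip per emitted token: the inefficiency noted above.
    const { topLogprobs } = await session.prompt(promptText + output, {
      maxTokens: 1,
      topLogprobs: 20,
    });
    // Keep only candidates that leave the output a valid prefix, then take
    // the most probable survivor.
    const valid = topLogprobs
      .filter((c) => isValidPrefix(output + c.token))
      .sort((a, b) => b.logprob - a.logprob);
    if (valid.length === 0) break; // no valid continuation available
    output += valid[0].token;
  }
  return output;
}
```

A native guidance solution could apply the same filter inside the sampler, without re-sending the growing prompt on every step.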
I was looking for this exact feature and couldn't find anything; it's an absolute must for me! (For the sake of clarity, I'm not interested in it for guidance purposes as others have mentioned; I only need access to the logprobs.) In my case, it would also be helpful to get the logprobs of any given text, not just its completion tokens. E.g.: "The cat sat" ->
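A sketch of what such a scoring call might look like; the `session.score` method and its return shape are hypothetical, and the example values in the comments are made up:

```ts
interface ScoredToken { token: string; logprob: number; }

// Hypothetical: the model scores existing text instead of generating.
// scoreText(session, "The cat sat") might see tokens like
// [{ token: "The", logprob: -2.1 }, { token: " cat", logprob: -5.3 }, ...]
async function scoreText(
  session: { score(text: string): Promise<ScoredToken[]> }, // assumed method
  text: string,
): Promise<number> {
  const tokens = await session.score(text);
  // The sum of the token logprobs is the log-likelihood of the whole
  // string under the model, useful for ranking or calibration.
  return tokens.reduce((sum, t) => sum + t.logprob, 0);
}
```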
The current API is great for producing a text response, but an option that gave us the logprobs for each streamed token would let us implement a lot more functionality on top of the model (a sketch of such an option follows this list), such as:
- basic guidance;
- estimating confidence levels;
- collecting multiple branches of output more efficiently;
- custom token heuristics instead of the built-in temperature/topK (I saw there was another proposal to add a seed option, but this would let you build that yourself);
- and more.
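As a sketch only: the `includeLogprobs`/`topLogprobs` options and the chunk shape below are invented for illustration; nothing except the general idea of a streaming prompt call is taken from the actual proposal:

```ts
interface TokenAlternative { token: string; logprob: number; }
interface LogprobChunk {
  token: string;                   // the sampled token
  logprob: number;                 // its log probability
  topLogprobs: TokenAlternative[]; // the k most likely alternatives
}

// Example use: flag low-confidence tokens while streaming.
async function streamWithConfidence(
  session: {
    promptStreaming(
      text: string,
      options?: { includeLogprobs?: boolean; topLogprobs?: number }, // assumed
    ): AsyncIterable<LogprobChunk>;
  },
  promptText: string,
): Promise<void> {
  for await (const chunk of session.promptStreaming(promptText, {
    includeLogprobs: true,
    topLogprobs: 2,
  })) {
    // Flag tokens the model assigned less than 50% probability.
    const confident = chunk.logprob > Math.log(0.5);
    console.log(chunk.token, confident ? "" : "(low confidence)");
  }
}
```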
Basically it could be modeled on something like the `top_logprobs` parameter that the OpenAI API has, which would return something like this for `top_logprobs=2`:
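(The shape below is reconstructed from the OpenAI Chat Completions documentation for `logprobs: true` with `top_logprobs: 2`; the tokens and numbers are made up for illustration.)

```ts
const response = {
  choices: [{
    logprobs: {
      // One entry per generated token, each with its own logprob and the
      // two most likely alternatives at that position.
      content: [
        {
          token: "Hello",
          logprob: -0.31,
          top_logprobs: [
            { token: "Hello", logprob: -0.31 },
            { token: "Hi", logprob: -1.42 },
          ],
        },
        // ...one entry per generated token
      ],
    },
  }],
};
```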