feat: Llm operator #1313
base: staging
Conversation
```python
def exec(self, *args, **kwargs) -> Iterator[Batch]:
    child_executor = self.children[0]
    for batch in child_executor.exec(**kwargs):
        llm_result = self.llm_expr.evaluate(batch)
```
Is the batch optimization done in the LLMExecutor? Will it be added in future PRs?
Yes
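For reference, a minimal sketch of what a batched path could look like (hypothetical: `build_prompts`, `fill_responses`, and `self.llm` are illustrative names, not code from this PR):

```python
def exec(self, *args, **kwargs) -> Iterator[Batch]:
    child_executor = self.children[0]
    for batch in child_executor.exec(**kwargs):
        # Instead of evaluating the LLM expression row by row, collect
        # every prompt in the batch and issue a single LLM request.
        prompts = build_prompts(self.llm_expr, batch)  # hypothetical helper
        responses = self.llm.generate(prompts)         # one batched call
        yield fill_responses(batch, responses)         # hypothetical helper
```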
```python
llm_exprs = []
for expr in exprs:
    if is_llm_expression(expr):
        llm_exprs.append(expr.copy())
```
Add a note here: chained function calls will not work. For example, `STRTODATAFRAME(LLM('EXTRACT SOME COLUMN', data))`.
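To make the limitation concrete, a hedged illustration of the failure mode (the loop mirrors the extraction pass above; the behavior described in the comments is inferred from this discussion):

```python
# is_llm_expression() matches only a top-level LLM(...) call, so for
#   STRTODATAFRAME(LLM('EXTRACT SOME COLUMN', data))
# the outer expression is STRTODATAFRAME and the nested LLM call is
# never lifted into llm_exprs -- the chained form silently falls through.
for expr in exprs:
    if is_llm_expression(expr):  # False for the chained example above
        llm_exprs.append(expr.copy())
```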
Yes, I'll add that in the next PR.
```python
new_root.append_child(plan_root)
plan_root = new_root
self._plan = plan_root
```
IMO, the generic way is to do this in the optimizer with apply-and-merge rules. What will the plan look like if we have `SELECT id, LLM(...) FROM some_table;`?
It will be `Project(id, llm.response) -> LLMExec() -> Get`.
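A minimal sketch of the wrapping step under that assumption (`LogicalLLM` is an illustrative plan-node name, not the PR's actual class):

```python
# For SELECT id, LLM(...) FROM some_table; the lifted LLM expressions
# are wrapped around the existing plan root, yielding
#   Project(id, llm.response) -> LLMExec() -> Get(some_table)
if llm_exprs:
    new_root = LogicalLLM(llm_exprs)  # illustrative plan-node name
    new_root.append_child(plan_root)
    plan_root = new_root
self._plan = plan_root
```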
```python
@retry(tries=6, delay=20)
def generate(self, prompts: List[str]) -> List[str]:
    import openai
```
It might be a good time to also add logging to the retry logic: https://tenacity.readthedocs.io/en/latest/#before-and-after-retry-and-logging. This will log the retry attempts in our logger so the user knows when rate-limiting errors occur. I found this helpful when waiting for a long time. The downside is that we must add the tenacity library to the requirements.
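A minimal sketch of the suggested tenacity-based retry with logging, keeping the same six attempts and 20-second delay as the current decorator:

```python
import logging
from typing import List

from tenacity import before_sleep_log, retry, stop_after_attempt, wait_fixed

logger = logging.getLogger(__name__)

@retry(
    stop=stop_after_attempt(6),
    wait=wait_fixed(20),
    # Log a warning before each sleep between attempts, so rate-limit
    # errors surface in our logger instead of stalling silently.
    before_sleep=before_sleep_log(logger, logging.WARNING),
)
def generate(self, prompts: List[str]) -> List[str]:
    import openai
    ...
```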
```python
try_to_import_tiktoken()
import tiktoken

encoding = tiktoken.encoding_for_model(self.model_name)
```
If we already have the response, we can directly compute the cost using the `response["usage"]` field, right? Tiktoken would be good for estimating the cost before executing the query (helpful for query optimization). Maybe we can have two functions, `estimate_cost` and `get_cost`. Estimating cost is not simple, though, because we do not know the completion tokens apriori; we would then need a heuristic for the estimated completion tokens.
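A hedged sketch of the two functions (the per-1K-token rates below are placeholder assumptions, not real pricing, and the completion-token heuristic is just one possible choice):

```python
import tiktoken

# Placeholder rates in USD per 1K tokens -- substitute the real pricing
# for the model in use.
PROMPT_RATE = 0.0015
COMPLETION_RATE = 0.002

def get_cost(response: dict) -> float:
    # Exact cost after execution: the API response reports actual usage.
    usage = response["usage"]
    return (usage["prompt_tokens"] * PROMPT_RATE
            + usage["completion_tokens"] * COMPLETION_RATE) / 1000

def estimate_cost(prompt: str, model_name: str) -> float:
    # Rough pre-execution estimate for the optimizer. Prompt tokens are
    # counted with tiktoken; completion tokens are unknown apriori, so
    # assume (crudely) the completion is as long as the prompt.
    encoding = tiktoken.encoding_for_model(model_name)
    prompt_tokens = len(encoding.encode(prompt))
    estimated_completion_tokens = prompt_tokens  # crude heuristic
    return (prompt_tokens * PROMPT_RATE
            + estimated_completion_tokens * COMPLETION_RATE) / 1000
```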
Yes, a valid concern. I was also thinking about it.
```diff
@@ -21,3 +21,4 @@
 IFRAMES = "IFRAMES"
 AUDIORATE = "AUDIORATE"
 DEFAULT_FUNCTION_EXPRESSION_COST = 100
+LLM_FUNCTIONS = ["chatgpt", "completion"]
```
Are we not adding the LLM operator to the parser? So the allowed LLM function names are restricted here? For example: `SELECT DummyLLM({prompt}, data) FROM fruitTable;`
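For context, a hedged guess at how the name check might work (assuming `is_llm_expression` does a lookup against `LLM_FUNCTIONS`; the import path and check below are assumptions, not the PR's actual code):

```python
from evadb.expression.function_expression import FunctionExpression  # assumed path

LLM_FUNCTIONS = ["chatgpt", "completion"]

def is_llm_expression(expr) -> bool:
    # Illustrative: if detection is a plain name lookup, then
    #   SELECT DummyLLM({prompt}, data) FROM fruitTable;
    # would not be routed through LLMExec, because "dummyllm" is not in
    # the allow-list -- it stays a regular function expression.
    return (isinstance(expr, FunctionExpression)
            and expr.name.lower() in LLM_FUNCTIONS)
```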