Describe the bug
Currently, the logic in the main `_blend()` function only considers the `default_model` object when aggregating stats in the `SmoothieMeta` object.
Any models specified via a `from_args()` call to an ingredient are therefore ignored.
To Reproduce
```python
# Imports assumed; module paths may differ across BlendSQL versions.
# (ChatMLTemplate import omitted; its path varies by version.)
from blendsql import blend, LLMMap
from blendsql.db import SQLite
from blendsql.models import OpenaiLLM, TransformersLLM
from blendsql.utils import fetch_from_hub

model = OpenaiLLM(
    "gpt-3.5-turbo",
    config={"temperature": 0.0},
    caching=False,
)
query = """SELECT DISTINCT venue FROM w WHERE city = 'sydney' AND {{ LLMMap( 'More than 30 total points?', 'w::score' ) }} = TRUE"""
ingredients = {
    LLMMap.from_args(
        model=TransformersLLM(
            "HuggingFaceTB/SmolLM-135M-Instruct",
            caching=False,
            config={"chat_template": ChatMLTemplate, "device_map": "cpu"},
        )
    )
}
db = SQLite(
    fetch_from_hub("1884_New_Zealand_rugby_union_tour_of_New_South_Wales_1.db")
)
smoothie = blend(
    query=query,
    db=db,
    ingredients=ingredients,
    default_model=model,
    verbose=True,
)

# Both of these are empty
print(smoothie.meta.raw_prompts)
print(smoothie.meta.prompts)
```
Expected behavior
We should track usage across all models invoked during a BlendSQL execution, not just the `default_model`.
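One possible shape for the fix, as a hypothetical sketch: collect every model touched during execution (the default plus any ingredient-level overrides) and merge their prompt logs when building the meta object. The `FakeModel`, `SmoothieMeta`, and `aggregate_meta` names below are simplified stand-ins, not BlendSQL's actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class FakeModel:
    """Stand-in for an LLM wrapper that records the prompts it was sent."""
    name: str
    raw_prompts: list = field(default_factory=list)

@dataclass
class SmoothieMeta:
    """Simplified stand-in for BlendSQL's SmoothieMeta stats container."""
    raw_prompts: list = field(default_factory=list)

def aggregate_meta(default_model, ingredient_models):
    """Merge prompt logs from the default model and all ingredient models."""
    # Deduplicate by identity: an ingredient may reuse the default model.
    seen, models = set(), []
    for m in [default_model, *ingredient_models]:
        if m is not None and id(m) not in seen:
            seen.add(id(m))
            models.append(m)
    meta = SmoothieMeta()
    for m in models:
        meta.raw_prompts.extend(m.raw_prompts)
    return meta

default = FakeModel("gpt-3.5-turbo", raw_prompts=["p1"])
small = FakeModel("SmolLM-135M-Instruct", raw_prompts=["p2", "p3"])
meta = aggregate_meta(default, [small, default])
print(meta.raw_prompts)  # ['p1', 'p2', 'p3']
```

With this shape, prompts from the ingredient's `TransformersLLM` would land in `smoothie.meta.raw_prompts` alongside those from the default model, instead of being dropped.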