Amazing work on Llama 3.2.

In the text prompt format documentation you mention that 3.2 uses a new tool-calling format that is pythonic, roughly of this shape (the snippets below use made-up function and argument names, just to illustrate the shapes):
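```python
# Illustrative only: the model emits a Python-style list of calls.
[get_weather(city="Paris", unit="celsius"), get_time(timezone="CET")]
```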
which is different from the previous JSON format in 3.1:
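```json
{"name": "get_weather", "parameters": {"city": "Paris", "unit": "celsius"}}
```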
or
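```
<function=get_weather>{"city": "Paris", "unit": "celsius"}</function>
```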
Questions

Is it accurate to say that 3.2-instruct was trained to produce this pythonic form (rather than JSON)? If so, should I expect better performance by prompting for the pythonic format over JSON?

You mention that this format is designed to be more flexible and powerful than the previous format. Would you mind sharing the rationale behind this? One benefit I see is that the format is more terse, so it should be faster to generate than JSON. And perhaps if the model is trained on lots of Python code, it will perform better with a pythonic tool-calling format. But JSON is easier to work with in non-Python environments, so I was curious whether there are other benefits to the new format.
My use case
I'm using this model in a non-Python (JavaScript) environment, so I would normally stick to a more standard JSON output format, but I don't want to compromise performance if the above is true.
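To make that concrete: pulling structured calls out of the pythonic format from TypeScript/JavaScript isn't hard for simple cases, but it does mean writing a small custom parser instead of just calling JSON.parse. A rough sketch of what I have in mind (hypothetical helper; it only handles flat calls with string or number literal arguments):

```typescript
// Purely illustrative: parse a flat pythonic tool-call string such as
//   [get_weather(city="Paris", unit="celsius"), get_time(timezone="CET")]
// into { name, args } objects. Assumes string/number literal arguments
// and no nested calls -- a real parser would need to handle more.

interface ToolCall {
  name: string;
  args: Record<string, string | number>;
}

function parsePythonicToolCalls(output: string): ToolCall[] {
  const calls: ToolCall[] = [];
  const callPattern = /(\w+)\(([^)]*)\)/g; // name(arg1=val1, arg2=val2)
  for (const match of output.matchAll(callPattern)) {
    const [, name, argString] = match;
    const args: Record<string, string | number> = {};
    for (const pair of argString.split(",")) {
      if (!pair.trim()) continue;
      const [key, rawValue] = pair.split("=").map((s) => s.trim());
      // Strip quotes from string literals, otherwise treat as a number.
      args[key] = /^['"].*['"]$/.test(rawValue)
        ? rawValue.slice(1, -1)
        : Number(rawValue);
    }
    calls.push({ name, args });
  }
  return calls;
}

// parsePythonicToolCalls('[get_weather(city="Paris", unit="celsius")]')
// => [{ name: "get_weather", args: { city: "Paris", unit: "celsius" } }]
```

With the JSON format, by contrast, the model output can be handed straight to JSON.parse, which is part of why I'd default to it in this environment.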