Replies: 5 comments
- I am starting it like this: […] And here is my test request: […] It generates the warning on each token.
- Yes, I am seeing it on every token generation. It's really annoying and slows things down. This is a recent problem. I also see it with --jinja enabled.
- I'm seeing it with Qwen 3.5 with the Jinja template on the command line as well. It spams on every generation, but at least it doesn't seem to slow things down.
- This message is displayed (I am using Qwen 3.5) after a recent update, with or without the --no-jinja option. It doesn't affect inference speed, but it will fill up the logs with this message.
- I found that commit 6c770d1 resolved the annoying warning.
- Same as the title: when I intentionally run llama-server with the --no-jinja option for a specific purpose, too many "No parser definition detected, assuming pure content parser." warning messages are printed. Since using the --no-jinja option intentionally avoids the model's built-in prompt format written in Jinja, is this warning message still necessary? I wish this warning would not be output when the --no-jinja option is used.
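For context, a minimal sketch of the kind of invocation described above. The flag name is taken as written in the discussion; the model path and port are placeholders, not from the post:

```shell
# Sketch of the reported setup: start llama-server with the model's
# built-in Jinja chat template disabled via --no-jinja.
# Model path and port are hypothetical placeholders.
llama-server -m ./models/qwen3.5.gguf --port 8080 --no-jinja
```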