I'm deploying the model linked below with the env vars listed at the bottom, but the server still truncates the context to 1513 tokens. Why aren't the env vars changing the context length?
Log messages:
"Request length is longer than the KV cache pool size or the max context length. Truncated"
"Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 1513, cache hit rate: 96.80%, token usage: 1.00, #running-req: 0, #queue-req: 0"
model: https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned-GPTQ-Int8
Env vars: context_length=4096, max_total_tokens=4096, max_prefill_tokens=4096, quantization=gptq_marlin