I'd like to ask: is a longer kv_range used during inference? The current configuration in the code uses only one additional range chunk for both the 4.5B and the 24B models, so the remaining KV cache appears to go unused.
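To make the concern concrete, here is a hypothetical sketch (the function name, parameters, and chunking scheme are my own illustration, not the repo's actual code) of what "only one additional range chunk" would mean: attention sees the current chunk plus a single preceding range, and all earlier cached chunks are never read.

```python
def select_kv_ranges(num_chunks: int, current: int, extra_ranges: int = 1):
    """Return the KV-cache chunk indices visible to attention at `current`.

    Hypothetical illustration: with extra_ranges=1 (as the config seems to
    imply), only the current chunk and one preceding chunk are attended.
    """
    start = max(0, current - extra_ranges)
    return list(range(start, current + 1))

# With 6 cached chunks, attention at the last chunk sees only chunks 4 and 5;
# chunks 0-3 remain in the KV cache but are never used.
visible = select_kv_ranges(num_chunks=6, current=5)
print(visible)  # [4, 5]
```

If this reading is right, increasing the number of extra range chunks (or making it configurable per model) would let inference actually exploit the full cached context.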