When the model output contains reasoning content inside `<think>` tags, I'm not sure whether it's better to omit that content before evaluation or keep it.
Here's why I'm asking: in the future we'll need ways to measure the performance of models similar to DeepSeek on general tasks. These models produce rather long thinking traces, which may introduce a length bias in judging, though people do read the thinking process while waiting for the final answer...
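For concreteness, here is a minimal sketch of the "omit" option, assuming the DeepSeek-R1-style `<think>...</think>` format (the tag name and helper are just illustrative):

```python
import re

# Matches a DeepSeek-R1-style reasoning block, including across newlines.
THINK_RE = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

def strip_thinking(output: str) -> str:
    """Remove the reasoning trace so only the final answer gets scored."""
    return THINK_RE.sub("", output).strip()

output = "<think>Let me work through this step by step...</think>The answer is 42."
print(strip_thinking(output))  # -> "The answer is 42."
```

But it's unclear to me whether stripping like this is the right call, since the thinking trace is part of what users actually experience.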
Curious to hear your thoughts on this. How can we better evaluate this kind of model output?
Looking forward to your insights. Thanks!