Hi @rasbt,

Maybe this is also related to #828, but here is the current `Qwen3Tokenizer` setup in the Qwen3 notebooks:

```python
if USE_REASONING_MODEL:
    tokenizer_file_path = f"Qwen3-{CHOOSE_MODEL}/tokenizer.json"
else:
    tokenizer_file_path = f"Qwen3-{CHOOSE_MODEL}-Base/tokenizer.json"

hf_hub_download(
    repo_id=repo_id,
    filename="tokenizer.json",
    local_dir=local_dir,
)

tokenizer = Qwen3Tokenizer(
    tokenizer_file_path=tokenizer_file_path,
    repo_id=repo_id,
    apply_chat_template=USE_REASONING_MODEL,
    add_generation_prompt=USE_REASONING_MODEL,
    add_thinking=not USE_INSTRUCT_MODEL
)
```

Maybe I misunderstood the meaning of `USE_REASONING_MODEL` and `USE_INSTRUCT_MODEL`, but don't we need to swap them during initialization?

```python
tokenizer = Qwen3Tokenizer(
    tokenizer_file_path=tokenizer_file_path,
    repo_id=repo_id,
    apply_chat_template=USE_INSTRUCT_MODEL,
    add_generation_prompt=USE_INSTRUCT_MODEL,
    add_thinking=not USE_REASONING_MODEL
)
```
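
In case it helps, a quick way to check whether the two initializations actually behave differently would be to compare the formatted prompts they produce. This is just a sketch on top of the snippet above, assuming the tokenizer exposes an `encode`/`decode` pair as I believe it does in the notebook:

```python
prompt = "Give me a short introduction to large language models."

# Tokenizer as currently initialized in the notebook
tok_current = Qwen3Tokenizer(
    tokenizer_file_path=tokenizer_file_path,
    repo_id=repo_id,
    apply_chat_template=USE_REASONING_MODEL,
    add_generation_prompt=USE_REASONING_MODEL,
    add_thinking=not USE_INSTRUCT_MODEL,
)

# Tokenizer with the flags swapped as proposed above
tok_swapped = Qwen3Tokenizer(
    tokenizer_file_path=tokenizer_file_path,
    repo_id=repo_id,
    apply_chat_template=USE_INSTRUCT_MODEL,
    add_generation_prompt=USE_INSTRUCT_MODEL,
    add_thinking=not USE_REASONING_MODEL,
)

# Encode with each tokenizer, then decode again to inspect the fully
# formatted prompt (chat template, generation prompt, thinking tags)
print(tok_current.decode(tok_current.encode(prompt)))
print(tok_swapped.decode(tok_swapped.encode(prompt)))
```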