I'm just wondering whether this tool supports Chinese corpora. For example, am I supposed to pre-tokenize the text with Jieba or another Chinese tokenizer? And is there an interface reserved for plugging in a Chinese tokenizer? Thanks a lot.
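
For context, here is a minimal sketch of the kind of pre-segmentation I have in mind, assuming the tool expects whitespace-separated tokens (the file names are just placeholders):

```python
import jieba

def segment_line(line: str) -> str:
    # jieba.lcut segments a Chinese sentence into a list of words;
    # join them with spaces so downstream whitespace tokenization works
    return " ".join(jieba.lcut(line.strip()))

# Hypothetical input/output paths, just for illustration
with open("corpus_zh.txt", encoding="utf-8") as fin, \
     open("corpus_zh_segmented.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(segment_line(line) + "\n")
```

Is something like this the expected workflow, or can a custom tokenizer be passed in directly?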