Description
`BufferedTokenizerExt` throws an exception when the discovered token is bigger than the `sizeLimit` parameter. However, in the existing implementation the check is executed only on the first token present in the input fragment, which means that if the second token is the one that exceeds the limit, no error is raised.
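To illustrate, here is a minimal, self-contained model of the flawed pattern (a simplified sketch, not the actual `BufferedTokenizerExt` source; class and method names are assumptions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified model of the flawed check (illustrative only): the size
// limit is verified only against the first token of each fragment.
class NaiveTokenizer {
    private final int sizeLimit;
    private final StringBuilder accumulator = new StringBuilder();

    NaiveTokenizer(int sizeLimit) {
        this.sizeLimit = sizeLimit;
    }

    List<String> extract(String fragment) {
        String[] parts = (accumulator.toString() + fragment).split("\n", -1);
        accumulator.setLength(0);
        // BUG: only parts[0] is checked against the limit.
        if (parts[0].length() > sizeLimit) {
            throw new IllegalStateException("input buffer full");
        }
        // All complete tokens are returned; tokens after the first
        // bypass the size check entirely.
        List<String> tokens =
            new ArrayList<>(Arrays.asList(parts).subList(0, parts.length - 1));
        // The trailing element is a partial token: keep it for the next call.
        accumulator.append(parts[parts.length - 1]);
        return tokens;
    }
}

public class Demo {
    public static void main(String[] args) {
        NaiveTokenizer tokenizer = new NaiveTokenizer(10);
        // The second token (100 chars) far exceeds sizeLimit (10),
        // yet extract() returns it without raising any error.
        List<String> tokens = tokenizer.extract("short\n" + "x".repeat(100) + "\n");
        tokens.forEach(t -> System.out.println(t.length())); // prints 5, then 100
    }
}
```

Running this prints `5` and then `100`: the 100-character token sails past a `sizeLimit` of 10 because only the first token of the fragment is ever checked.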
While the implementation could be considered buggy in this respect, the problem can be worked around by selecting a `sizeLimit` that is bigger than the length of the input fragment. Whether that is practical depends on the context where the tokenizer is used; in the current code base, `sizeLimit` is used only in the json_lines codec.
This means that whether the problem appears depends on which input plugin the codec is used with.
If used with the TCP input (https://github.com/logstash-plugins/logstash-input-tcp/blob/e5ef98f781ab921b6a1ef3bb1095d597e409ea86/lib/logstash/inputs/tcp.rb#L215), `decode_buffer` invokes the codec with the buffer read from the socket, which for TCP could be a fragment of 64Kb.
For a more practical view of this issue, see #16968 (comment).
Ideal solution
To solve this problem, the `BufferedTokenizer`'s `extract` method should return an iterator rather than an array (or list). The iterator should apply the boundary check on each `next` invocation, as sketched below.
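A possible shape for such an iterator, written as plain Java (class and method names here are illustrative assumptions, not the actual `BufferedTokenizerExt` API):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch of an extract() returning a lazy iterator: the boundary check
// runs on every next() call, so an oversized token is rejected no matter
// where it sits in the fragment. Names are illustrative assumptions.
class IteratingTokenizer {
    private final String delimiter;
    private final int sizeLimit;
    private final StringBuilder accumulator = new StringBuilder();

    IteratingTokenizer(String delimiter, int sizeLimit) {
        this.delimiter = delimiter;
        this.sizeLimit = sizeLimit;
    }

    Iterable<String> extract(String fragment) {
        accumulator.append(fragment);
        return () -> new Iterator<String>() {
            @Override
            public boolean hasNext() {
                return accumulator.indexOf(delimiter) >= 0;
            }

            @Override
            public String next() {
                int pos = accumulator.indexOf(delimiter);
                if (pos < 0) {
                    throw new NoSuchElementException();
                }
                String token = accumulator.substring(0, pos);
                // Consume the token before checking, so a caller that
                // catches the error can keep iterating past the bad token.
                accumulator.delete(0, pos + delimiter.length());
                // Boundary check applied per token, on each next() call.
                if (token.length() > sizeLimit) {
                    throw new IllegalStateException("input buffer full");
                }
                return token;
            }
        };
    }
}
```

With this shape, the oversized token from the earlier example is rejected exactly when it is pulled from the iterator, and a caller that catches the error can still continue consuming the remaining tokens, since the bad token has already been removed from the accumulator.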