Description
Currently the Tokenizer doesn't support whitespace between the items of tuples and arrays: instead of being ignored, the whitespace is passed to the lower-level parser, which often causes errors. It would be great to simply trim it; AFAIK whitespace doesn't carry any useful information for any of the tokenized types.
For example, parsing `bytes[]` from the string `[0x0a, 0x0b]` fails with the cryptic message `expected value of type: bytes[]`. This happens because the space between `0x0a` and `0x0b` isn't ignored; it's treated as part of `0x0b`, which confuses the hex decoder. The only accepted payload is `[0x0a,0x0b]`, which is hard to guess and less human-readable.
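To illustrate the requested behavior, here is a minimal sketch of splitting array items and trimming surrounding whitespace before handing each item to the lower-level parser. The function name and approach are hypothetical, purely illustrative of the fix, not the Tokenizer's actual API:

```rust
// Hypothetical helper: split a bracketed array payload into items,
// trimming whitespace so "[0x0a, 0x0b]" parses the same as "[0x0a,0x0b]".
// This is an illustration of the desired trimming, not the real Tokenizer code.
fn split_array_items(input: &str) -> Vec<String> {
    input
        .trim()
        .trim_start_matches('[')
        .trim_end_matches(']')
        .split(',')
        // Trimming here is the whole point: without it, " 0x0b" would be
        // forwarded to the hex decoder and fail.
        .map(|item| item.trim().to_string())
        .collect()
}

fn main() {
    let items = split_array_items("[0x0a, 0x0b]");
    assert_eq!(items, vec!["0x0a", "0x0b"]);
    println!("{:?}", items);
}
```

With trimming in place, both `[0x0a, 0x0b]` and `[0x0a,0x0b]` would tokenize identically.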
For context, I'm coming to this as a user of Foundry, where `cast abi-encode` is extremely tricky to use with arrays and tuples.