
Support spaces in Tokenizer #295

@CodeSandwich

Description


Currently the Tokenizer doesn't support whitespace between items of tuples and arrays: instead of being ignored, it is passed on to the lower-level parser, often causing errors. It'd be great to just trim the whitespace; AFAIK it doesn't carry any useful information for any of the tokenized types.
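
A minimal sketch of the behaviour I'm asking for, assuming a standalone helper rather than the actual Tokenizer API (tokenize_array_items is a hypothetical name, and nested brackets/quoting are ignored for brevity): split the bracketed literal on commas and trim each item before handing it to the per-type parser.

```rust
// Hypothetical helper, not ethabi's real Tokenizer: split an array
// literal into items and trim the surrounding whitespace so the
// lower-level parsers never see it.
fn tokenize_array_items(input: &str) -> Result<Vec<String>, String> {
    let inner = input
        .trim()
        .strip_prefix('[')
        .and_then(|s| s.strip_suffix(']'))
        .ok_or_else(|| format!("expected array literal, got: {}", input))?;

    if inner.trim().is_empty() {
        return Ok(Vec::new());
    }

    // NOTE: a real implementation must respect nested brackets and quoted
    // strings; a plain split on ',' is only for illustration.
    // Trimming here is the whole feature request: " 0x0b" becomes "0x0b".
    Ok(inner.split(',').map(|item| item.trim().to_string()).collect())
}

fn main() {
    assert_eq!(
        tokenize_array_items("[0x0a, 0x0b]").unwrap(),
        vec!["0x0a", "0x0b"]
    );
}
```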

For example, parsing bytes[] from the string [0x0a, 0x0b] fails with the cryptic message "expected value of type: bytes[]". This is because the space between 0x0a and 0x0b isn't ignored; it's treated as part of 0x0b, which confuses the hex decoder. The only accepted payload is [0x0a,0x0b], which is hard to guess and less human-readable.
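
To illustrate the failure mode, here's a hypothetical hex parser (not ethabi's actual implementation): the untrimmed item keeps its leading space, so the 0x prefix check, and with it the hex decoding, fails, while the trimmed item parses fine.

```rust
// Hypothetical stand-in for the lower-level hex parser.
fn parse_hex_bytes(item: &str) -> Option<Vec<u8>> {
    // The leading space in " 0x0b" makes this prefix check fail already.
    let digits = item.strip_prefix("0x")?;
    if digits.len() % 2 != 0 || !digits.chars().all(|c| c.is_ascii_hexdigit()) {
        return None;
    }
    Some(
        (0..digits.len())
            .step_by(2)
            .map(|i| u8::from_str_radix(&digits[i..i + 2], 16).unwrap())
            .collect(),
    )
}

fn main() {
    // The item as the Tokenizer currently hands it over: parsing fails.
    assert_eq!(parse_hex_bytes(" 0x0b"), None);
    // Trimming the surrounding whitespace first makes it parse.
    assert_eq!(parse_hex_bytes(" 0x0b".trim()), Some(vec![0x0b]));
}
```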

For context, I'm coming to this as a user of Foundry, where cast abi-encode is extremely tricky to use with arrays and tuples.
