bug: Solidity-compatible hashes #123

Open
@enitrat

Description

There is a problem with the Solidity-compatible hashes example. It currently calls keccak_u256s_be_inputs to hash a span of u256 words; however, this function only produces correct results for full u256 words!

Consider the following:

use debug::PrintTrait;
use keccak::keccak_u256s_be_inputs;

#[test]
fn test_full_word() {
    let input_data: Span<u256> = array![
        0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
    ]
        .span();
    let hashed = keccak_u256s_be_inputs(input_data);

    // Split the hashed value into two 128-bit segments
    let low: u128 = hashed.low;
    let high: u128 = hashed.high;

    // Reverse each 128-bit segment
    let reversed_low = integer::u128_byte_reverse(low);
    let reversed_high = integer::u128_byte_reverse(high);

    // Merge the reversed segments back into a u256 value, swapping low and high
    let compatible_hash = u256 { low: reversed_high, high: reversed_low };

    assert(
        compatible_hash == 0xa9c584056064687e149968cbab758a3376d22aedc6a55823d1b3ecbee81b8fb9,
        'wrong hash'
    )
}

#[test]
fn test_partial_word() {
    let input_data: Span<u256> = array![0xAA].span();
    let hashed = keccak_u256s_be_inputs(input_data);

    // Split the hashed value into two 128-bit segments
    let low: u128 = hashed.low;
    let high: u128 = hashed.high;

    // Reverse each 128-bit segment
    let reversed_low = integer::u128_byte_reverse(low);
    let reversed_high = integer::u128_byte_reverse(high);

    // Merge the reversed segments back into a u256 value, swapping low and high
    let compatible_hash = u256 { low: reversed_high, high: reversed_low };

    assert(
        compatible_hash == 0xdb81b4d58595fbbbb592d3661a34cdca14d7ab379441400cbfa1b78bc447c365,
        'wrong hash'
    )
}

If you run the tests, you'll notice that the hash matches for the full u256 word (with all bits set) but not for the partially filled word.

This is a bit tricky, but here's what you should do instead:

  • Split the u256 words into u64 chunks
  • If you're hashing multiple words, pack the partially filled last u64 chunk of input n together with the first u64 chunk of input n+1 (see the sketch below)
  • This requires counting how many bytes each word actually uses.

See how it's implemented in Alexandria for byte inputs, or in Herodotus for u64-word inputs.
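
For reference, here is a minimal sketch of the partial-word handling, assuming the corelib's keccak::cairo_keccak(ref input: Array<u64>, last_input_word: u64, last_input_num_bytes: usize) entry point (little-endian u64 words plus an explicit trailing partial word). It hashes the single byte 0xAA rather than a zero-padded u256, and reuses the low/high byte reversal from the tests above; the multi-word packing step is omitted.

use keccak::cairo_keccak;

// Sketch only (assumed corelib conventions): cairo_keccak takes the complete
// little-endian u64 words, plus the trailing partial word and its byte count.
fn keccak_single_byte_sketch() -> u256 {
    // No complete 8-byte words in this input: the single byte 0xAA
    // goes entirely into the partial last word (1 byte used).
    let mut words64: Array<u64> = array![];
    let hashed = cairo_keccak(ref words64, 0xAA, 1);

    // Same conversion to a Solidity-compatible (big-endian) value as in the
    // tests above: reverse each 128-bit half, then swap the halves.
    let reversed_low = integer::u128_byte_reverse(hashed.low);
    let reversed_high = integer::u128_byte_reverse(hashed.high);
    u256 { low: reversed_high, high: reversed_low }
}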
