kkrt-labs/zkvm-benchmarks

ZK-VM Benchmarks

This repo is inspired by grandchildrice's ZK-VM benchmarks, themselves based on the a16z ZK-VM benchmarks. The grandchildrice benchmarks focus on ZK-VMs capable of executing Rust programs (Jolt, Nexus, OpenVM, Pico, RiscZero, SP1, ZKM), aiming for fair conditions on powerful machines. The current repo extends these to ZK-VMs targeting zkDSLs, namely Cairo M, Noir ProveKit, and Miden.

The goal of these benchmarks is to compare ZK-VMs in the context of client-side proving on consumer devices: laptop and mobile. On-chain verification is out of scope, and proof size matters less.

Since the languages used to define programs do not all have the same features, an applicative approach has been taken: the benchmarked program is the most optimized version for a given zkDSL/ZK-VM. For example, the Fibonacci program uses the native field of the ZK-VM where possible.

Benchmark state

The following ZK-VMs are currently benchmarked (guest programs: Fibonacci and SHA256):

  • Cairo
  • Cairo0
  • Cairo M
  • Miden
  • Noir ProveKit
  • OpenVM
  • RiscZero
  • SP1
  • stark-v
  • Valida
  • ZKM

The following ZK-VMs are yet to be adapted: Ceno, Jolt, Nexus, Noir Barretenberg, Pico.

Versions

The following table lists the versions of the ZK-VMs used in these benchmarks and their release dates.

| ZK-VM | Version / Commit | Release Date | Repository |
|---|---|---|---|
| Cairo | stwo-cairo 4bf779d (main) | Jan 2026 | starkware-libs/stwo-cairo |
| Cairo0 | cairo-vm v3.0.1 | Dec 2025 | lambdaclass/cairo-vm |
| Cairo M | stwo v1.0.0 | Nov 2025 | starkware-libs/stwo |
| Miden | 0.20.1 | Dec 2025 | 0xMiden/miden-vm |
| Noir ProveKit | | | worldfnd/ProveKit |
| OpenVM | v1.4.2 | Dec 8, 2025 | openvm-org/openvm |
| RiscZero | 3.0.4 | Nov 24, 2025 | risc0/risc0 |
| SP1 | 5.2.4 | Dec 15, 2025 | succinctlabs/sp1 |
| stark-v | main branch | Jan 2026 | AntoineFONDEUR/stark-v |
| Valida | 1.0.0 | Sep 24, 2025 | lita-xyz/valida |
| ZKM | v1.2.3 | Dec 10, 2025 | ProjectZKM/Ziren |

Benchmarks can be run on ARM64 (macOS) and x86 architectures.

Prerequisites

MacOS

Linux arm64

Run Benchmarks

Local

Setup ZK-VM Toolchains

Install all required toolchains:

Linux x86-64

Launch benchmark

Either launch all benchmarks in a single command:

make bench-all

Or launch the benchmark for a given ZK-VM:

make bench-<cairo|cairo-m|miden|noir-provekit|openvm|risczero|sp1|stark-v|valida|zkm>

Benchmark Details

Guest Programs

Not all the benchmarked ZK-VMs work with identical guest programs. To still compare the different projects, an applicative approach has been taken: the most optimized version of the guest program for each project is used.

Fibonacci

The computation of the 10th, 100th, 1,000th, 10,000th and 100,000th terms of the Fibonacci sequence is benchmarked.

For Fibonacci, the program requirements are loose, so a ZK-VM's native field can be used in its zkDSL, avoiding range checks while still computing the n-th term of the Fibonacci sequence.
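As an illustration of native-field Fibonacci, all arithmetic can be reduced modulo the VM's field prime, so intermediate values stay in-field and no range checks are needed. A minimal sketch in Python, assuming the Mersenne-31 prime 2^31 - 1 used by Stwo-based VMs (the actual guest programs live in each VM's own zkDSL):

```python
# Native-field Fibonacci: every addition is reduced modulo the field prime,
# so intermediate values never leave the field and need no range checks.
# P is the Mersenne-31 prime used by Stwo-based VMs (assumption for illustration).
P = 2**31 - 1

def fib_mod(n: int, p: int = P) -> int:
    """Return the n-th Fibonacci term modulo p, with fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % p
    return a
```

For small n this matches the plain integer Fibonacci sequence; past roughly fib(46) the values wrap around the prime, which is exactly the behavior a native-field guest program relies on.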

Security Level

To compare the various ZK-VM projects fairly, all should be configured to the same expected security level, expressed in bits. The following table summarizes the expected security level of the ZK-VMs in these benchmarks.

| ZK-VM | Security level (bits) | Security model docs |
|---|---|---|
| Cairo | 96 | link |
| Cairo M | 96 | link |
| Miden | 96 | link |
| Noir ProveKit | 128 | link |
| OpenVM | 100 | link1, link2 |
| RiscZero | 96 | link |
| SP1 | 100 | link1, link2, link3 |
| Valida | 48 | link |
| ZKM | 100 | link1, link2 |

For FRI-STARK-based ZK-VMs, the security level is conjectured, based on proximity-gap results and the "toy problem" conjecture for FRI. This means the soundness is not proven in the traditional cryptographic sense (see paper).

The security level is tunable: $\text{conjectured\_security} = \text{n\_queries} \cdot \log_2(\text{blowup\_factor}) + \text{pow\_bits}$

  • by modifying the number of queries, which increases the proof size;
  • by modifying the number of PoW bits (grinding), which slightly increases the proving time;
  • by modifying the blowup factor, which greatly increases the proving time.
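The formula above is easy to evaluate directly. A small sketch, using illustrative parameter values that are not tied to any particular ZK-VM in the table:

```python
import math

def conjectured_security(n_queries: int, blowup_factor: int, pow_bits: int) -> float:
    """Conjectured FRI security in bits: n_queries * log2(blowup_factor) + pow_bits."""
    return n_queries * math.log2(blowup_factor) + pow_bits

# Illustrative parameters (an assumption, not any VM's actual configuration):
# 28 queries at blowup 8 (3 bits per query) plus 12 PoW bits -> 96 bits.
print(conjectured_security(n_queries=28, blowup_factor=8, pow_bits=12))  # 96.0
```

This makes the trade-offs in the bullets concrete: doubling the blowup factor adds only one bit per query but inflates the trace domain, while extra queries add bits linearly at the cost of proof size.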

Results

The end-to-end (E2E) duration corresponds to a production context, where one executes the compiled program and generates a proof from the runner output: $\text{e2e\_duration} = \text{exec\_duration} + \text{proof\_duration}$
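A minimal harness sketch for timing these two phases separately, assuming hypothetical `run_program` and `prove` adapters for a given VM (not part of this repo's actual code):

```python
import time

def bench_e2e(run_program, prove, program, inputs):
    """Time execution and proving separately; the E2E duration is their sum.

    run_program and prove are hypothetical per-VM adapter callables:
    run_program(program, inputs) -> execution trace, prove(trace) -> proof.
    """
    t0 = time.perf_counter()
    trace = run_program(program, inputs)
    exec_duration = time.perf_counter() - t0

    t1 = time.perf_counter()
    proof = prove(trace)  # proof object is ignored here; only timing matters
    proof_duration = time.perf_counter() - t1

    return {
        "exec_duration": exec_duration,
        "proof_duration": proof_duration,
        "e2e_duration": exec_duration + proof_duration,
    }
```

Timing the phases separately shows where a given VM spends its time: zkDSL VMs often have cheap execution and expensive proving, so the E2E split matters more than either number alone.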

MacBook M2 Max

Results can be found here. Note that other processes running on your device can influence the results.

Contributions

If there are inconsistencies, errors, or improvements, contributions are welcome.

About

Applicative ZkVM benchmarks
