Benchmarks for executables #1

@PoignardAzur

Hello!

I've really enjoyed reading through these benchmarks and drawing conclusions about how code is structured. The multi-crate build times in particular really help you see how many crates are bottlenecked on proc_macro2 + quote + syn.

That being said, I think there's a flaw in the methodology you used: you've pulled the most popular crates by download count, which means you've got ecosystem crates, i.e. dependencies that are used by a lot of projects. Their compilation profile might not be representative of the experience of an end-user project: ecosystem crates will try to use only the dependencies they strictly need, whereas "leaf" crates will likely pull in lots of dependencies to cover a wide range of features.

For instance, the serde crate depends only on serde_derive, which means that when compiling serde with the derive feature, there will be a tight proc_macro2 -> quote -> syn -> serde_derive -> serde critical path.

A CLI program, on the other hand, might pull in serde, clap, and tokio, which muddles the critical path: after syn comes serde_derive, clap_derive, and tokio-macros all at once.
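
To check this on an actual checkout, cargo itself can surface both sides of the picture: `cargo tree --invert` shows which crates transitively pull in syn, and `cargo build --timings` (cargo >= 1.60) writes a per-crate timing/concurrency report. A small sketch, which only prints the commands so it stays runnable without a Rust toolchain:

```shell
# Dry-run sketch: the cargo invocations that expose the proc-macro
# critical path in a project checkout. Both subcommands are real cargo
# features (`--timings` needs cargo >= 1.60); the sketch only prints
# them so it stays runnable outside a checkout.
inspect_cmds="cargo tree --invert --package syn
cargo build --release --timings"
printf '%s\n' "$inspect_cmds"
```

Run inside a project directory, the first command lists every reverse dependency of syn, and the second writes an HTML report to target/cargo-timings/ that makes the build's critical path visible as gaps in parallelism.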

So, it might be interesting to publish a similar benchmark with popular rust programs. Some suggestions:

  • cargo
  • ripgrep
  • nushell
  • fdfind
  • bat
  • exa
  • tokei
  • wasmtime
  • uutils
  • diskonaut
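
A minimal harness for such a benchmark might look like the sketch below. It assumes `hyperfine` is installed and that you have a checkout of each program (`--prepare` is a real hyperfine flag that runs `cargo clean` before each timed build); since actually running it needs a network connection and a Rust toolchain, the loop only prints the command it would run in each checkout:

```shell
# Dry-run sketch of a clean-build benchmark over a few of the suggested
# programs. `hyperfine --prepare` is a real flag; running this for real
# requires a Rust toolchain and a checkout of each repository, so the
# loop only prints the command it would run in each one.
programs="cargo ripgrep nushell bat tokei"
for p in $programs; do
    echo "in $p/: hyperfine --prepare 'cargo clean' 'cargo build --release'"
done
```

Cleaning between runs keeps each measurement a cold build, which is the scenario where the proc-macro critical path matters most; dropping the `--prepare` step would instead measure warm rebuilds.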

I don't think I have the know-how or the resources to run those benchmarks myself, but I'd be super happy to help if I can, or if you can provide some mentoring.

Thanks again for your work!
