
[benchmark] add h200 bench #1361


Draft: wants to merge 1 commit into `main`
Conversation

@asaiacai commented Jul 2, 2025

DO NOT MERGE: WIP

This is a baseline for multi-node pretraining on H200s, since there don't currently seem to be any published numbers for H200.

fix attribution
@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Jul 2, 2025
Contributor

Could you rename this to `llama3_8b_h200_202506_trainy.md`?


### Results

Detailed performance results and training configurations can be found in the tables below and can be visualized in [this WandB report](https://wandb.ai/asaiacai/torchtitan/reports/Trainy-Llama-8B-32xH200-vs-64xH200--VmlldzoxMzIxMjMyMw#llama-8b). `TPS` and `Memory (GiB)` are sampled at the 100th iteration.
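For readers comparing against their own runs, the `TPS` column is tokens processed per second per GPU at a given step. A minimal sketch of how such a figure is derived from a step's batch shape and wall-clock time (the function name and example values here are illustrative assumptions, not torchtitan's actual logging code):

```python
# Illustrative sketch: derive a per-GPU tokens-per-second (TPS) figure,
# like the one sampled at iteration 100, from the local batch shape and
# the measured step time. Names and values are hypothetical.

def tokens_per_second(seq_len: int, batch_size: int, step_time_s: float) -> float:
    """Tokens processed per second by one GPU for a single training step."""
    return (seq_len * batch_size) / step_time_s

# e.g. 8192-token sequences, local batch size 2, 1.25 s per optimizer step
tps = tokens_per_second(seq_len=8192, batch_size=2, step_time_s=1.25)
print(tps)  # 13107.2 tokens/s per GPU
```

Aggregate cluster throughput is then this per-GPU figure times the number of GPUs, which is why the report compares 32xH200 against 64xH200 at matched per-GPU settings.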
Contributor

I got "It looks like you've landed on a locked or empty page." from the link.
