
Conversation

@gagika (Collaborator) commented Jan 15, 2026

Description

Adds a --skip_first_token flag to tests/forward_pass_logit_checker.py. This ignores the first token during logit comparison, preventing failures caused by high entropy at the first token, which is sensitive to numerical accuracy, especially for MoE models with many experts.
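
As an aside, a minimal sketch of what skipping the first position during comparison can look like (the compare_logits helper, its argument names, and the tolerance below are hypothetical illustrations, not the script's actual API):

import numpy as np

def compare_logits(golden_logits, train_logits, skip_first_token=False, atol=0.1):
  # Sketch only: when the flag is set, drop position 0 from both logit arrays,
  # then run an ordinary element-wise closeness check on the remaining tokens.
  start = 1 if skip_first_token else 0
  golden = np.asarray(golden_logits)[start:]
  train = np.asarray(train_logits)[start:]
  return np.allclose(golden, train, atol=atol)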

Tests

Verified locally on CPU. The script successfully ignored the initial token mismatch and passed the test criteria for the subsequent tokens using the new flag.

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

codecov bot commented Jan 15, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@RissyRan (Collaborator) commented:
Wondering: if the 1st token doesn't match, will this affect the following tokens?

@gagika (Collaborator, Author) commented Jan 15, 2026

> Wondering: if the 1st token doesn't match, will this affect the following tokens?

At the first token, entropy is quite high (there are many plausible options), so large models (and MoE models in particular) can be sensitive to numerical accuracy issues (e.g. routing to a different expert).

The 2nd, 3rd, and subsequent tokens are all conditioned on the previous tokens (including the first) and have lower entropy, so they are less sensitive to numerical issues.
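
A rough way to see this numerically (a sketch for illustration, not part of this PR): compute per-position softmax entropy from a logits slice; position 0 tends to have the largest value, so small numeric drift there moves the top logits the most.

import numpy as np

def token_entropy(logits_slice):
  # Shannon entropy (in nats) of the softmax distribution at each position.
  logits = np.asarray(logits_slice, dtype=np.float64)
  logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
  probs = np.exp(logits)
  probs = probs / probs.sum(axis=-1, keepdims=True)
  return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

# e.g. token_entropy(golden_logits_slice)[0] is typically the largest entry,
# which is why the first-token comparison is the noisiest.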

@shuningjin (Collaborator) left a comment:


Thanks for identifying the noise in the initial token prediction and adding the option to skip! LGTM.

Comment on lines 306 to +309
max_logging.log("\n[logits: token 2]")
max_logging.log(f"{golden_logits_slice[2]=}")
max_logging.log(f"{train_logits_slice[2]=}")
if train_logits_slice.shape[0] > 2:
  max_logging.log(f"{golden_logits_slice[2]=}")
  max_logging.log(f"{train_logits_slice[2]=}")

nit: could you move this log inside the if and update the index?

if train_logits_slice.shape[0] > 2:
  max_logging.log(f"\n[logits: token {start_index+2}]")

Same for max_logging.log("\n[probability: token 1]") below.
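
For concreteness, the analogous change for the probability log could look like this (a sketch; the exact guard and index offset depend on the surrounding code):

# apply the same start_index offset to the probability log label
max_logging.log(f"\n[probability: token {start_index + 1}]")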

Comment on lines 511 to +513
parser.add_argument("--clip_logits_epsilon", type=float, required=False, default=None)
parser.add_argument(
"--skip_first_token",
@shuningjin (Collaborator) commented Jan 19, 2026

"--skip_first_token" is only effective when we compare against pre-generated logit file, but not when run hf model on the fly (run_hf_model=False).

  • For clarity, might be good to raise not implemented error if run_hf_model=false and skip_first_token=true`
  • same for "--clip_logits_epsilon"
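
A minimal sketch of the suggested guard, assuming it runs right after argument parsing; the error messages are illustrative:

args = parser.parse_args()
# Sketch: reject the flag combinations called out above as not implemented,
# instead of silently ignoring them.
if not args.run_hf_model and args.skip_first_token:
  raise NotImplementedError("--skip_first_token is not implemented for run_hf_model=False")
if not args.run_hf_model and args.clip_logits_epsilon is not None:
  raise NotImplementedError("--clip_logits_epsilon is not implemented for run_hf_model=False")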

