Conversation

@yasxai yasxai commented Jan 16, 2026

Summary

This PR adds a script to convert Gemma 3 checkpoints from HuggingFace (safetensors) format to MaxText-compatible Orbax format.

Problem

  • MaxText's existing convert_gemma3_chkpt.py only supports Kaggle JAX/Flax checkpoints
  • HuggingFace hosts Gemma 3 weights only in PyTorch/safetensors format
  • Kaggle authentication issues prevent many users from downloading Flax weights

Solution

A new script convert_gemma3_hf_to_maxtext.py that:

  1. Loads HuggingFace safetensors files
  2. Converts bfloat16 weights to NumPy arrays (preserving precision)
  3. Reshapes attention/MLP weights to MaxText's expected layout
  4. Applies RMSNorm offset (+1.0) as required by Gemma architecture
  5. Saves directly to GCS in Orbax checkpoint format
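Steps 2–4 above can be sketched in plain NumPy. This is an illustrative sketch, not the PR's actual code: the dimensions below are hypothetical toy values, and the assumed MaxText per-head layout `[hidden, num_heads, head_dim]` is an assumption for demonstration.

```python
import numpy as np

# Hypothetical toy dimensions for illustration (not Gemma 3 27B's real config).
HIDDEN = 8
NUM_HEADS = 2
HEAD_DIM = 4

def convert_attention_proj(hf_weight: np.ndarray) -> np.ndarray:
    """Reshape an HF-style projection [num_heads * head_dim, hidden]
    into an assumed per-head layout [hidden, num_heads, head_dim]."""
    w = hf_weight.reshape(NUM_HEADS, HEAD_DIM, HIDDEN)
    return w.transpose(2, 0, 1)  # -> [hidden, num_heads, head_dim]

def convert_rmsnorm(hf_scale: np.ndarray) -> np.ndarray:
    """Gemma stores RMSNorm scales zero-centered; the conversion
    applies the +1.0 offset the architecture expects."""
    return hf_scale.astype(np.float32) + 1.0
```

The real script would additionally iterate over every safetensors shard and map each HF parameter name to its MaxText counterpart before writing the Orbax checkpoint.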

Tested on

  • Model: google/gemma-3-27b-it
  • Hardware: TPU v4-32 pod
  • Python: 3.10.12

Usage

python convert_gemma3_hf_to_maxtext.py \
  --input_path=/path/to/gemma3-27b-hf \
  --output_path=gs://bucket/gemma3-27b-maxtext \
  --model_size=27b
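A minimal sketch of how the script's CLI could be wired up with argparse. The flag names mirror the usage example above; the `choices` list and help strings are assumptions, not taken from the actual script.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flag names follow the usage example; choices are assumed Gemma 3 sizes.
    p = argparse.ArgumentParser(
        description="Convert Gemma 3 HF safetensors to a MaxText Orbax checkpoint."
    )
    p.add_argument("--input_path", required=True,
                   help="Local directory containing the HF safetensors shards")
    p.add_argument("--output_path", required=True,
                   help="GCS (gs://...) or local path for the Orbax checkpoint")
    p.add_argument("--model_size", required=True,
                   choices=["1b", "4b", "12b", "27b"],
                   help="Gemma 3 model size, used to pick the parameter mapping")
    return p
```

With this parser, the usage invocation shown above resolves to `model_size == "27b"` and the two paths as plain strings.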

@google-cla

google-cla bot commented Jan 16, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.
