
Conversation

@abeeha123

This PR introduces a temporary runtime patch that adds the missing non-persistent buffers (position_ids, token_type_ids) to the exported program's state_dict. This resolves the KeyError that occurs during the Hugging Face to TVM import. Note: this is still under testing; an AssertionError (_assert_tensor_metadata.default) remains to be addressed.

…e KeyError (position_ids, token_type_ids) in the Relax frontend.
@gemini-code-assist

Summary of Changes

Hello @abeeha123, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request provides a temporary runtime solution to a KeyError encountered when importing Hugging Face models into TVM. The fix involves dynamically injecting non-persistent buffers, specifically position_ids and token_type_ids, into the exported program's state_dict to ensure they are available during the TVM import process. While this addresses the immediate KeyError, further work is noted to resolve an AssertionError related to tensor metadata.
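
A rough, self-contained sketch of the buffer-injection idea described above (a sketch only: the wrapper module M, the buffer names, and the plain-dict merge are assumptions reconstructed from this summary and the review snippets below, not the PR's actual position_id_fix.py):

import torch
from transformers import BertModel


class M(torch.nn.Module):
    # Thin wrapper around a Hugging Face BERT model (assumed structure).
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-multilingual-uncased")

    def forward(self, input_ids):
        return self.bert(input_ids).last_hidden_state


m = M().eval()
x = torch.randint(0, m.bert.config.vocab_size, (2, 16))
ep = torch.export.export(m, (x,))

# position_ids / token_type_ids are registered with persistent=False, so
# torch.export leaves them out of ep.state_dict; collect them here.
extra = {}
for buf_name in ("position_ids", "token_type_ids"):
    tensor = getattr(m.bert.embeddings, buf_name, None)
    if tensor is not None:
        extra[f"bert.embeddings.{buf_name}"] = tensor

# Merge the extras with the exported state_dict so the TVM Relax importer
# can resolve every buffer it looks up during conversion.
patched_state_dict = {**ep.state_dict, **extra}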

Highlights

  • Problem Resolution: Addresses a KeyError that occurs during the Hugging Face to TVM import process by ensuring that missing non-persistent buffers are included in the exported program's state_dict.
  • Runtime Patch Implementation: Introduces a temporary runtime patch using StateDictWrapper and EPWrapper classes to dynamically inject position_ids and token_type_ids into the exported program's state_dict.
  • New Script Added: A new Python script, position_id_fix.py, has been added to demonstrate and implement this temporary fix for the buffer injection.
  • Known Limitation: The current implementation is still under testing and has a known AssertionError (_assert_tensor_metadata.default) that needs to be resolved.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a script to patch the Hugging Face model export process for TVM import. The patch addresses an issue with missing non-persistent buffers by wrapping the state_dict and the ExportedProgram object.

My review focuses on improving the script's robustness, readability, and adherence to Python best practices. I've suggested using standard library features like collections.ChainMap to simplify the dictionary wrapping, replacing a magic number with a configuration value, implementing a more robust proxy pattern for the ExportedProgram wrapper, and adhering to standard import conventions. These changes will make the script cleaner and more maintainable.

torch.manual_seed(0)
m = M().eval()

x = torch.randint(0, 30522, (2, 16))


Severity: medium

The value 30522 is a magic number, which appears to be the vocabulary size for 'bert-base-multilingual-uncased'. It's better to fetch this value from the model's configuration to improve readability and maintainability. This makes the code more robust if the model changes.

Suggested change
x = torch.randint(0, 30522, (2, 16))
x = torch.randint(0, m.bert.config.vocab_size, (2, 16))
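
As a hedged aside (not part of the suggestion itself), the same value can also be read from the checkpoint's configuration without instantiating the model first; the model name here is taken from the comment above:

import torch
from transformers import AutoConfig

# Read the vocabulary size from the model's config instead of hardcoding it.
config = AutoConfig.from_pretrained("bert-base-multilingual-uncased")
x = torch.randint(0, config.vocab_size, (2, 16))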

print(f"Injecting buffer: bert.embeddings.{buf_name} -> shape {tensor.shape}")

# Wrap exported state_dict
sd_wrapped = StateDictWrapper(ep.state_dict, extra)


Severity: medium

The custom StateDictWrapper class can be replaced by collections.ChainMap for a more concise and idiomatic implementation. ChainMap is designed for linking multiple dictionaries.

After this change, you can remove the StateDictWrapper class definition (lines 8-22) and add import collections to the top of the file.

Suggested change
sd_wrapped = StateDictWrapper(ep.state_dict, extra)
sd_wrapped = collections.ChainMap(extra, ep.state_dict)
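
For reviewers unfamiliar with collections.ChainMap, a standalone toy example of the lookup order it provides (placeholder string values, not real tensors): maps are searched left to right, so entries in extra shadow or supplement the exported state_dict.

import collections

base = {"weight": "from ep.state_dict"}
extra = {"position_ids": "injected buffer"}
sd_wrapped = collections.ChainMap(extra, base)

print(sd_wrapped["position_ids"])  # resolved from extra
print(sd_wrapped["weight"])        # falls through to base

One caveat: ChainMap implements the Mapping interface but is not a dict subclass, so if the downstream importer performs a strict isinstance(..., dict) check, a plain merged dict such as {**ep.state_dict, **extra} may be the safer drop-in.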

Comment on lines +56 to +65
class EPWrapper:
    def __init__(self, ep, sd_wrapped):
        self.__dict__["_ep"] = ep
        self.__dict__["_sd"] = sd_wrapped

    def __getattr__(self, name):
        if name == "state_dict":
            return self._sd
        return getattr(self._ep, name)


Severity: medium

This proxy implementation for ExportedProgram is minimal. A more robust and idiomatic way to create this wrapper is by using a property for state_dict. This avoids overriding __getattr__ in a way that could be brittle and makes the intent clearer. The suggested implementation is cleaner and less prone to subtle bugs if the from_exported_program API has more complex interactions with the object.

Suggested change
class EPWrapper:
    def __init__(self, ep, sd_wrapped):
        self.__dict__["_ep"] = ep
        self.__dict__["_sd"] = sd_wrapped

    def __getattr__(self, name):
        if name == "state_dict":
            return self._sd
        return getattr(self._ep, name)

class EPWrapper:
    def __init__(self, ep, sd_wrapped):
        self._ep = ep
        self._sd = sd_wrapped

    @property
    def state_dict(self):
        return self._sd

    def __getattr__(self, name):
        return getattr(self._ep, name)
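
If the wrapper is then handed to TVM's Relax importer as in the rest of the script, usage would presumably look roughly like this (a sketch under the assumption that the script calls tvm.relax.frontend.torch.from_exported_program, with ep and sd_wrapped coming from the earlier steps):

from tvm.relax.frontend.torch import from_exported_program

# The importer reads ep_wrapped.state_dict through the property and sees the
# injected buffers; all other attribute access is forwarded to the real
# ExportedProgram via __getattr__.
ep_wrapped = EPWrapper(ep, sd_wrapped)
mod = from_exported_program(ep_wrapped)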

Comment on lines 74 to 75
import traceback
traceback.print_exc()


Severity: medium

It is a standard Python convention (PEP 8) to place all imports at the top of the file. This improves readability and makes it easier to see the script's dependencies at a glance. Please move import traceback to the top of the script with the other imports.

Suggested change
import traceback
traceback.print_exc()

traceback.print_exc()

- Used collections.ChainMap for buffer injection
- Added property-based EPWrapper
- Removed hardcoded vocab size
- Moved imports to comply with PEP 8
@abeeha123 abeeha123 force-pushed the fix-relax-torch-keyerror branch 2 times, most recently from 1acca3e to e114e9c Compare November 4, 2025 09:36