Implement a runtime fix for the Hugging Face export that addresses the KeyError (position_ids, token_type_ids) in the Relax frontend. #18408
base: main
Conversation
Summary of Changes

Hello @abeeha123, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request provides a temporary runtime solution to the KeyError raised for the missing non-persistent buffers (position_ids, token_type_ids) when importing a Hugging Face export into the Relax frontend.
Code Review
This pull request introduces a script to patch the Hugging Face model export process for TVM import. The patch addresses an issue with missing non-persistent buffers by wrapping the state_dict and the ExportedProgram object.
My review focuses on improving the script's robustness, readability, and adherence to Python best practices. I've suggested using standard library features like collections.ChainMap to simplify the dictionary wrapping, replacing a magic number with a configuration value, implementing a more robust proxy pattern for the ExportedProgram wrapper, and adhering to standard import conventions. These changes will make the script cleaner and more maintainable.
```python
torch.manual_seed(0)
m = M().eval()

x = torch.randint(0, 30522, (2, 16))
```
The value 30522 is a magic number, which appears to be the vocabulary size for 'bert-base-multilingual-uncased'. It's better to fetch this value from the model's configuration to improve readability and maintainability. This makes the code more robust if the model changes.
Suggested change:

```diff
-x = torch.randint(0, 30522, (2, 16))
+x = torch.randint(0, m.bert.config.vocab_size, (2, 16))
```
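For illustration, a minimal standalone sketch of reading the vocabulary size from the checkpoint's configuration instead of hardcoding it; the checkpoint name 'bert-base-multilingual-uncased' is taken from the comment above and may not match the PR's script exactly.

```python
import torch
from transformers import AutoConfig

# Checkpoint name assumed from the review comment above, not verified
# against the PR's test script.
config = AutoConfig.from_pretrained("bert-base-multilingual-uncased")

# Sample token ids from the model's actual vocabulary size so the dummy
# input stays valid even if the checkpoint is swapped out later.
x = torch.randint(0, config.vocab_size, (2, 16))
print(x.shape, config.vocab_size)
```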
```python
print(f"Injecting buffer: bert.embeddings.{buf_name} -> shape {tensor.shape}")

# Wrap exported state_dict
sd_wrapped = StateDictWrapper(ep.state_dict, extra)
```
The custom StateDictWrapper class can be replaced by collections.ChainMap for a more concise and idiomatic implementation. ChainMap is designed for linking multiple dictionaries.
After this change, you can remove the StateDictWrapper class definition (lines 8-22) and add import collections to the top of the file.
Suggested change:

```diff
-sd_wrapped = StateDictWrapper(ep.state_dict, extra)
+sd_wrapped = collections.ChainMap(extra, ep.state_dict)
```
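A small self-contained sketch of the lookup behaviour this suggestion relies on; the tensors below are hypothetical stand-ins for the PR's `extra` buffers and exported `state_dict`.

```python
import collections
import torch

# Hypothetical stand-ins for the exported program's state_dict and the
# injected non-persistent buffers discussed in this thread.
exported_sd = {"bert.embeddings.word_embeddings.weight": torch.zeros(30522, 768)}
extra = {
    "bert.embeddings.position_ids": torch.arange(512).unsqueeze(0),
    "bert.embeddings.token_type_ids": torch.zeros(1, 512, dtype=torch.long),
}

# ChainMap searches its maps left to right: the injected buffers are found
# first, and every other key falls through to the exported state_dict.
sd_wrapped = collections.ChainMap(extra, exported_sd)

assert "bert.embeddings.position_ids" in sd_wrapped
assert sd_wrapped["bert.embeddings.word_embeddings.weight"].shape == (30522, 768)
```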
```python
class EPWrapper:
    def __init__(self, ep, sd_wrapped):
        self.__dict__["_ep"] = ep
        self.__dict__["_sd"] = sd_wrapped

    def __getattr__(self, name):
        if name == "state_dict":
            return self._sd
        return getattr(self._ep, name)
```
This proxy implementation for ExportedProgram is minimal. A more robust and idiomatic way to create this wrapper is by using a property for state_dict. This avoids overriding __getattr__ in a way that could be brittle and makes the intent clearer. The suggested implementation is cleaner and less prone to subtle bugs if the from_exported_program API has more complex interactions with the object.
Suggested change:

```diff
-class EPWrapper:
-    def __init__(self, ep, sd_wrapped):
-        self.__dict__["_ep"] = ep
-        self.__dict__["_sd"] = sd_wrapped
-
-    def __getattr__(self, name):
-        if name == "state_dict":
-            return self._sd
-        return getattr(self._ep, name)
+class EPWrapper:
+    def __init__(self, ep, sd_wrapped):
+        self._ep = ep
+        self._sd = sd_wrapped
+
+    @property
+    def state_dict(self):
+        return self._sd
+
+    def __getattr__(self, name):
+        return getattr(self._ep, name)
```
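A self-contained check of how the suggested proxy behaves, using a hypothetical FakeExportedProgram stand-in rather than a real torch.export.ExportedProgram:

```python
class FakeExportedProgram:
    """Hypothetical stand-in for torch.export.ExportedProgram."""
    def __init__(self):
        self.state_dict = {"weight": 1}
        self.graph_signature = "sig"

class EPWrapper:
    def __init__(self, ep, sd_wrapped):
        self._ep = ep
        self._sd = sd_wrapped

    @property
    def state_dict(self):
        return self._sd

    def __getattr__(self, name):
        # Only reached when normal lookup fails, so _ep and _sd (set as plain
        # instance attributes in __init__) never recurse into this method.
        return getattr(self._ep, name)

wrapped = EPWrapper(FakeExportedProgram(), {"weight": 1, "position_ids": 2})
assert wrapped.state_dict == {"weight": 1, "position_ids": 2}  # served by the property
assert wrapped.graph_signature == "sig"                        # delegated to the wrapped object
```

The property keeps the state_dict override explicit on the class, while __getattr__ only handles attributes that should fall through to the real exported program.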
```python
import traceback
traceback.print_exc()
```
It is a standard Python convention (PEP 8) to place all imports at the top of the file. This improves readability and makes it easier to see the script's dependencies at a glance. Please move import traceback to the top of the script with the other imports.
Suggested change:

```diff
-import traceback
-traceback.print_exc()
+traceback.print_exc()
```
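For completeness, a tiny sketch of the layout this comment asks for: the import sits at module level and only the call remains in the error handler. The try body is a placeholder, not the PR's actual export code.

```python
import traceback

def run_export():
    # Placeholder for the export / Relax import steps in the PR's script.
    raise RuntimeError("placeholder failure")

try:
    run_export()
except Exception:
    traceback.print_exc()
```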
- Used collections.ChainMap for buffer injection
- Added property-based EPWrapper
- Removed hardcoded vocab size
- Moved imports to comply with PEP 8
Force-pushed from 1acca3e to e114e9c.
  
    
This PR introduces a temporary runtime patch to add the missing non-persistent buffers (position_ids, token_type_ids) into the exported program's state_dict. This resolves the KeyError that occurs during the Hugging Face to TVM import.

Note: Still under testing; an AssertionError (_assert_tensor_metadata.default) still needs to be addressed.
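For readers who want the whole flow in one place, here is a rough end-to-end sketch assembled from the snippets in this conversation. The wrapper module M, the buffer-injection loop, and the Relax entry point from_exported_program are assumptions based on the diff context and PR description, not a verified copy of the PR's script, and the exact TVM API may differ by version.

```python
import collections
import torch
from torch.export import export
from transformers import BertModel
from tvm.relax.frontend.torch import from_exported_program  # name per this PR's context

class M(torch.nn.Module):
    """Hypothetical wrapper module; the PR's actual test module may differ."""
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-multilingual-uncased")

    def forward(self, input_ids):
        return self.bert(input_ids).last_hidden_state

torch.manual_seed(0)
m = M().eval()
x = torch.randint(0, m.bert.config.vocab_size, (2, 16))
ep = export(m, (x,))

# Collect the non-persistent buffers (position_ids, token_type_ids) that
# torch.export leaves out of ep.state_dict, keyed the way the import expects.
extra = {}
for buf_name, tensor in m.bert.embeddings.named_buffers():
    key = f"bert.embeddings.{buf_name}"
    if key not in ep.state_dict:
        print(f"Injecting buffer: bert.embeddings.{buf_name} -> shape {tensor.shape}")
        extra[key] = tensor

# Overlay the injected buffers on the exported state_dict and expose the
# merged view through a thin proxy around the ExportedProgram.
sd_wrapped = collections.ChainMap(extra, ep.state_dict)

class EPWrapper:
    def __init__(self, ep, sd_wrapped):
        self._ep = ep
        self._sd = sd_wrapped

    @property
    def state_dict(self):
        return self._sd

    def __getattr__(self, name):
        return getattr(self._ep, name)

mod = from_exported_program(EPWrapper(ep, sd_wrapped))
```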