System Info
I am running into a critical compatibility issue between optimum and recent versions of transformers.
❗ Error Summary
When using:
transformers==4.51.3
optimum==1.26.1
onnx==1.17.0
onnxruntime==1.20.0
The following runtime error is thrown when attempting to load an ONNX model using ORTModelForTokenClassification.from_pretrained:
AttributeError: type object 'AutoConfig' has no attribute 'from_dict'
This traces back to:
config = AutoConfig.from_pretrained(...)
# ↓ internally calls:
return CONFIG_MAPPING[pattern].from_dict(config_dict, **unused_kwargs)
However, in transformers>=4.48, the method AutoConfig.from_dict appears to have been deprecated or removed. This causes optimum to break at runtime when trying to load ONNX models.
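If from_dict is indeed gone from AutoConfig, a conditional fallback along the following lines could restore compatibility. This is only a sketch of the idea, not optimum's actual code; it assumes CONFIG_MAPPING (keyed by model_type) remains importable from transformers:
# Sketch of a conditional fallback (not optimum's actual code).
from transformers import AutoConfig, CONFIG_MAPPING

def config_from_dict(config_dict, **kwargs):
    # Older transformers releases: use the attribute if it still exists.
    if hasattr(AutoConfig, "from_dict"):
        return AutoConfig.from_dict(config_dict, **kwargs)
    # Newer releases: dispatch on model_type to the concrete config class,
    # whose from_dict (inherited from PretrainedConfig) is still public.
    config_cls = CONFIG_MAPPING[config_dict["model_type"]]
    return config_cls.from_dict(config_dict, **kwargs)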
📦 Package Versions
transformers - 4.51.3
optimum - 1.26.1
onnx - 1.17.0
onnxruntime - 1.20.0
torch - 2.2.6
Due to a security advisory, we are required to upgrade to transformers>=4.48. However, even the latest optimum==1.26.1 does not appear to be compatible with the changes introduced in recent transformers versions.
ASK:
1. Is support for transformers>=4.48 (particularly 4.51.3) planned for an upcoming optimum release?
2. Could the AutoConfig.from_dict dependency be refactored or conditionally patched (e.g., with a hasattr fallback, as sketched above) to restore compatibility?
3. Is there a compatibility roadmap available between transformers and optimum for ONNX workflows?
Who can help?
No response
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction (minimal, reproducible, runnable)
1. Install transformers==4.51.3 and optimum==1.26.1.
2. Load an exported ONNX model with ORTModelForTokenClassification.from_pretrained(...).
3. Observe the AttributeError about AutoConfig.from_dict.
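A minimal script reproducing the steps above (a sketch; "path/to/onnx/model" is a placeholder for any directory containing a token-classification model previously exported to ONNX, e.g. with optimum-cli export onnx):
# pip install transformers==4.51.3 optimum==1.26.1 onnx==1.17.0 onnxruntime==1.20.0
from optimum.onnxruntime import ORTModelForTokenClassification

# Placeholder path: any previously exported ONNX token-classification model.
model = ORTModelForTokenClassification.from_pretrained("path/to/onnx/model")
# Raises: AttributeError: type object 'AutoConfig' has no attribute 'from_dict'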
Expected behavior
When using optimum==1.26.1 with transformers>=4.48 (specifically 4.51.3), the following should work without error:
from optimum.onnxruntime import ORTModelForTokenClassification
model = ORTModelForTokenClassification.from_pretrained("path/to/onnx/model")
The model should load successfully using the ONNX Runtime backend.
Internally, AutoConfig.from_pretrained(...) should function correctly regardless of changes in the transformers API (e.g., deprecation/removal of from_dict).
ONNX workflows should remain compatible with newer transformers versions, allowing teams to benefit from critical updates and security patches without breaking ONNX integration.