Pull requests: intel/auto-round
- #1526: support MTP params: copy, fp8 dequant, and WOQ RTN quantization (opened Mar 10, 2026 by xin3he; 2 of 9 tasks)
- #1525: fix dynamic int8 w8a8 export issue with tuning (opened Mar 10, 2026 by thuang6; 6 tasks)
- #1512: Support GLM-Image model quantization (opened Mar 8, 2026 by lvliang-intel; 2 of 9 tasks)
- #1511: Fix #1284: preserve FP8 format for layers specified in ignore_layers (opened Mar 8, 2026 by LuciferDono; 5 tasks done)
- #1404: Support Qwen3 and Qwen2.5 Omni model quantization (opened Feb 4, 2026 by lvliang-intel; 2 of 9 tasks)
- #1365: Refactor module access to use PyTorch get/set_submodule API (opened Jan 29, 2026 by scopophobic)
- #1349: support hadamard transform for mxfp4 with rtn or autoround method (opened Jan 27, 2026 by lkk12014402)
- #1339: refactor init of compressor (opened Jan 26, 2026 by n1ck-guo; labels: engineering, ready; 1 of 9 tasks)
- #1289: Robust FP8 layer detection for ignore_layers (#1283) (opened Jan 15, 2026 by scopophobic)
- #1286: Fix ignore_layers not working for FP8 models (opened Jan 15, 2026 by Copilot; 11 tasks done)
- #1278: [WIP][refactor quantizers][step 1] refactor rtn and tuning (opened Jan 14, 2026 by n1ck-guo)