Error in pip editable mode in export_llama #9278
Labels
module: build/install
Issues related to the cmake and buck2 builds, and to installing ExecuTorch
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Repro: start from a fresh ET git clone.
Running the command below (see the conda invocation in the log) produces the following error. The same command works fine without the --editable option.

File "/Users/myuan/src/executorch/examples/models/llama/export_llama_lib.py", line 25, in <module>
from executorch.backends.vulkan._passes.remove_asserts import remove_asserts
File "/Users/myuan/src/executorch/backends/vulkan/__init__.py", line 7, in <module>
from .partitioner.vulkan_partitioner import VulkanPartitioner
File "/Users/myuan/src/executorch/backends/vulkan/partitioner/vulkan_partitioner.py", line 16, in <module>
from executorch.backends.vulkan.op_registry import (
File "/Users/myuan/src/executorch/backends/vulkan/op_registry.py", line 225, in <module>
exir_ops.edge.quantized_decomposed.quantize_per_channel.default,
File "/Users/myuan/src/executorch/exir/dialects/_ops.py", line 104, in __getattr__
raise AttributeError(
AttributeError: '_OpNamespace' 'edge.quantized_decomposed' object has no attribute 'quantize_per_channel'
ERROR conda.cli.main_run:execute(47): conda run python -m examples.models.llama.export_llama -p /Users/myuan/data/stories_110M/params.json -c /Users/myuan/data/stories_110M/stories110M.pt -X --xnnpack-extended-ops -qmode 8da4w -G 128 --use_kv_cache --use_sdpa_with_kv_cache --verbose --output_name test_sdpa_with_kv.pte failed. (See above for error)
Process finished with exit code 1
Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250311
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.31.4
Libc version: N/A
Python version: 3.10.16 (main, Dec 11 2024, 10:22:29) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] executorch==0.6.0a0+9a0c2db
[pip3] flake8==6.1.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==24.4.26
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] torch==2.7.0.dev20250311
[pip3] torchao==0.10.0+git7d879462
[pip3] torchaudio==2.6.0.dev20250311
[pip3] torchsr==1.0.4
[pip3] torchtune==0.5.0
[pip3] torchvision==0.22.0.dev20250311
[conda] executorch 0.6.0a0+9a0c2db pypi_0 pypi
[conda] numpy 2.2.2 pypi_0 pypi
[conda] torch 2.7.0.dev20250311 pypi_0 pypi
[conda] torchao 0.10.0+git7d879462 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250311 pypi_0 pypi
[conda] torchfix 0.6.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtune 0.5.0 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250311 pypi_0 pypi
cc @larryliu0820 @jathu @lucylq