
Error in pip editable mode in export_llama #9278

Open

iseeyuan opened this issue Mar 14, 2025 · 2 comments
Assignees
Labels
module: build/install Issues related to the cmake and buck2 builds, and to installing ExecuTorch triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@iseeyuan
Contributor

iseeyuan commented Mar 14, 2025

🐛 Describe the bug

Repro: start from a fresh ExecuTorch git clone.

git clone https://github.com/pytorch/executorch.git
cd executorch

git submodule sync
git submodule update --init

./install_executorch.sh --editable

Run the command below; it fails with the following error:

python -m examples.models.llama.export_llama -p /Users/myuan/data/stories_110M/params.json -c /Users/myuan/data/stories_110M/stories110M.pt -X --xnnpack-extended-ops -qmode 8da4w -G 128 --use_kv_cache --use_sdpa_with_kv_cache --verbose --output_name test_sdpa_with_kv.pte 
Traceback (most recent call last):
  File "/Users/myuan/src/executorch/exir/dialects/_ops.py", line 100, in __getattr__
    parent_packet = getattr(self._op_namespace, op_name)
  File "/Users/myuan/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/_ops.py", line 1267, in __getattr__
    raise AttributeError(
AttributeError: '_OpNamespace' 'quantized_decomposed' object has no attribute 'quantize_per_channel'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/myuan/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/myuan/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/myuan/src/executorch/examples/models/llama/export_llama.py", line 20, in <module>
    from .export_llama_lib import build_args_parser, export_llama
  File "/Users/myuan/src/executorch/examples/models/llama/export_llama_lib.py", line 25, in <module>
    from executorch.backends.vulkan._passes.remove_asserts import remove_asserts
  File "/Users/myuan/src/executorch/backends/vulkan/__init__.py", line 7, in <module>
    from .partitioner.vulkan_partitioner import VulkanPartitioner
  File "/Users/myuan/src/executorch/backends/vulkan/partitioner/vulkan_partitioner.py", line 16, in <module>
    from executorch.backends.vulkan.op_registry import (
  File "/Users/myuan/src/executorch/backends/vulkan/op_registry.py", line 225, in <module>
    exir_ops.edge.quantized_decomposed.quantize_per_channel.default,
  File "/Users/myuan/src/executorch/exir/dialects/_ops.py", line 104, in __getattr__
    raise AttributeError(
AttributeError: '_OpNamespace' 'edge.quantized_decomposed' object has no attribute 'quantize_per_channel'
ERROR conda.cli.main_run:execute(47): conda run python -m examples.models.llama.export_llama -p /Users/myuan/data/stories_110M/params.json -c /Users/myuan/data/stories_110M/stories110M.pt -X --xnnpack-extended-ops -qmode 8da4w -G 128 --use_kv_cache --use_sdpa_with_kv_cache --verbose --output_name test_sdpa_with_kv.pte failed. (See above for error)

Process finished with exit code 1

The same command works without the --editable option.
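(For readers unfamiliar with the "direct cause" line in the traceback: it comes from Python's `raise ... from ...` exception chaining, which is what `exir/dialects/_ops.py` does when an op lookup fails. A minimal runnable sketch of that pattern — a toy namespace class, not the actual ExecuTorch code:)

```python
class OpNamespace:
    """Toy stand-in for a torch-style op namespace with lazy lookup."""

    def __init__(self, ops):
        self._ops = ops

    def __getattr__(self, name):
        # __getattr__ is only invoked when normal attribute lookup fails.
        if name.startswith("_"):
            raise AttributeError(name)
        try:
            return self._ops[name]
        except KeyError as e:
            # `from e` records the KeyError as the direct cause, which is
            # what produces the "The above exception was the direct cause
            # of the following exception" section in a traceback.
            raise AttributeError(
                f"namespace has no attribute {name!r}"
            ) from e


ns = OpNamespace({"add": lambda a, b: a + b})
print(ns.add(2, 3))  # 5
try:
    ns.quantize_per_channel
except AttributeError as err:
    print(type(err.__cause__).__name__)  # KeyError
```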


Versions

Collecting environment information...
PyTorch version: 2.7.0.dev20250311
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.31.4
Libc version: N/A

Python version: 3.10.16 (main, Dec 11 2024, 10:22:29) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Versions of relevant libraries:
[pip3] executorch==0.6.0a0+9a0c2db
[pip3] flake8==6.1.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==24.4.26
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] torch==2.7.0.dev20250311
[pip3] torchao==0.10.0+git7d879462
[pip3] torchaudio==2.6.0.dev20250311
[pip3] torchsr==1.0.4
[pip3] torchtune==0.5.0
[pip3] torchvision==0.22.0.dev20250311
[conda] executorch 0.6.0a0+9a0c2db pypi_0 pypi
[conda] numpy 2.2.2 pypi_0 pypi
[conda] torch 2.7.0.dev20250311 pypi_0 pypi
[conda] torchao 0.10.0+git7d879462 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250311 pypi_0 pypi
[conda] torchfix 0.6.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtune 0.5.0 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250311 pypi_0 pypi

cc @larryliu0820 @jathu @lucylq

@iseeyuan iseeyuan added module: build/install Issues related to the cmake and buck2 builds, and to installing ExecuTorch triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Mar 14, 2025
@larryliu0820
Contributor

I think the issue is the relative import from .export_llama_lib import build_args_parser, export_llama. I'll check whether it's fixed after changing it to an absolute import.
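(To illustrate why the relative import is suspect: a relative import only resolves when the module is loaded as part of its package, not when the file is executed without package context. A runnable toy reproduction of the layout — throwaway package and file contents are hypothetical, only mimicking `examples/models/llama`:)

```python
import importlib
import sys
import tempfile
import textwrap
from pathlib import Path

# Build a tiny throwaway package mimicking the layout:
#   pkg/export_llama.py does `from .export_llama_lib import build_args_parser`
root = Path(tempfile.mkdtemp())
pkg = root / "pkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "export_llama_lib.py").write_text(
    "def build_args_parser():\n    return 'parser'\n"
)
(pkg / "export_llama.py").write_text(textwrap.dedent("""
    from .export_llama_lib import build_args_parser
"""))

sys.path.insert(0, str(root))

# Imported as part of the package (like `python -m pkg.export_llama`
# when the package machinery is intact), the relative import resolves:
mod = importlib.import_module("pkg.export_llama")
print(mod.build_args_parser())  # parser

# Executed without package context (like running the file directly),
# the very same relative import fails:
try:
    exec((pkg / "export_llama.py").read_text(), {"__name__": "__main__"})
except ImportError as e:
    print("no package context:", e)
```

An absolute import (e.g. via the installed `executorch` package path) sidesteps this, since it resolves against `sys.path` regardless of how the entry file is invoked.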

@larryliu0820
Contributor

Unfortunately this is the same issue as #9558. We need to make progress on #8699 in order to make this work.
