Name and Version
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon PRO W7900 (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
ggml_vulkan: 1 = AMD Radeon PRO W7900 (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
version: 5572 (7675c55)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
Operating systems
Linux
GGML backends
Vulkan
Hardware
2x Radeon Pro W7900
Models
No response
Problem description & steps to reproduce
When I run llama-server (and llama-bench) with ubatch=2048 on Qwen3-30B-A3B, I hit a GGML_ASSERT(nei0 * nei1 <= 4096) failed error as soon as prompt processing starts. The dense Qwen3-32B works perfectly with the same settings, so the problem seems specific to the MoE path.
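For context, here is my reading of the assert (an assumption based on the assert text and the metadata dump below, not verified against the Vulkan source): in the mul_mat_id path, nei0 and nei1 look like the dimensions of the expert-ids tensor, i.e. experts used per token and tokens per ubatch. This model has qwen3moe.expert_used_count = 8, so the default ubatch of 512 lands exactly on the 4096 limit (8 * 512 = 4096), while 2048 overflows it (8 * 2048 = 16384). A dense model never takes the mul_mat_id path, which would explain why Qwen3-32B is unaffected. A minimal sketch of the arithmetic:

```cpp
// Sketch only: why -ub 2048 would trip GGML_ASSERT(nei0 * nei1 <= 4096)
// for Qwen3-30B-A3B, under the ASSUMPTION that nei0 = experts used per
// token and nei1 = ubatch size (inferred from the assert, not confirmed
// in ggml-vulkan.cpp).
#include <cstdio>

int main() {
    const int n_expert_used = 8;          // qwen3moe.expert_used_count from the GGUF dump
    const int ubatch_sizes[] = {512, 2048};

    for (int n_ubatch : ubatch_sizes) {
        const long long nei = 1LL * n_expert_used * n_ubatch;
        std::printf("ubatch=%4d -> nei0*nei1 = %5lld (%s 4096)\n",
                    n_ubatch, nei, nei <= 4096 ? "<=" : ">");
    }
    // ubatch= 512 -> 8 *  512 =  4096  passes the assert (exactly at the limit)
    // ubatch=2048 -> 8 * 2048 = 16384  fails the assert
    return 0;
}
```

If that reading is right, any -ub above 512 should fail for this model on the Vulkan backend.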
First Bad Commit
No response
Relevant log output
ultimis@ultimis-desktop:~/LLM/llama.cpp/build/bin$ ./llama-server -m /home/ultimis/LLM/Models/Qwen3-30B-A3B-UD-Q4_K_XL.gguf -c 100000 -ngl 999 --split-mode none --host 0.0.0.0 --port 8081 --flash-attn -ctv q8_0 -ctk q8_0 -ub 2048
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon PRO W7900 (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
ggml_vulkan: 1 = AMD Radeon PRO W7900 (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
build: 5572 (7675c555) with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
system info: n_threads = 64, n_threads_batch = 64, total_threads = 128
system_info: n_threads = 64 (n_threads_batch = 64) / 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |
main: binding port with default address family
main: HTTP server is listening, hostname: 0.0.0.0, port: 8081, http threads: 127
main: loading model
srv load_model: loading model '/home/ultimis/LLM/Models/Qwen3-30B-A3B-UD-Q4_K_XL.gguf'
llama_model_load_from_file_impl: using device Vulkan0 (AMD Radeon PRO W7900 (RADV NAVI31)) - 49136 MiB free
llama_model_loader: loaded meta data with 35 key-value pairs and 579 tensors from /home/ultimis/LLM/Models/Qwen3-30B-A3B-UD-Q4_K_XL.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3moe
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3-30B-A3B
llama_model_loader: - kv 3: general.basename str = Qwen3-30B-A3B
llama_model_loader: - kv 4: general.quantized_by str = Unsloth
llama_model_loader: - kv 5: general.size_label str = 30B-A3B
llama_model_loader: - kv 6: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 7: qwen3moe.block_count u32 = 48
llama_model_loader: - kv 8: qwen3moe.context_length u32 = 40960
llama_model_loader: - kv 9: qwen3moe.embedding_length u32 = 2048
llama_model_loader: - kv 10: qwen3moe.feed_forward_length u32 = 6144
llama_model_loader: - kv 11: qwen3moe.attention.head_count u32 = 32
llama_model_loader: - kv 12: qwen3moe.attention.head_count_kv u32 = 4
llama_model_loader: - kv 13: qwen3moe.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 14: qwen3moe.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 15: qwen3moe.expert_used_count u32 = 8
llama_model_loader: - kv 16: qwen3moe.attention.key_length u32 = 128
llama_model_loader: - kv 17: qwen3moe.attention.value_length u32 = 128
llama_model_loader: - kv 18: qwen3moe.expert_count u32 = 128
llama_model_loader: - kv 19: qwen3moe.expert_feed_forward_length u32 = 768
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 25: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 26: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 27: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - kv 30: general.file_type u32 = 15
llama_model_loader: - kv 31: quantize.imatrix.file str = Qwen3-30B-A3B-GGUF/imatrix_unsloth.dat
llama_model_loader: - kv 32: quantize.imatrix.dataset str = unsloth_calibration_Qwen3-30B-A3B.txt
llama_model_loader: - kv 33: quantize.imatrix.entries_count i32 = 384
llama_model_loader: - kv 34: quantize.imatrix.chunks_count i32 = 685
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q4_K: 290 tensors
llama_model_loader: - type q5_K: 37 tensors
llama_model_loader: - type q6_K: 11 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 16.49 GiB (4.64 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3moe
print_info: vocab_only = 0
print_info: n_ctx_train = 40960
print_info: n_embd = 2048
print_info: n_layer = 48
print_info: n_head = 32
print_info: n_head_kv = 4
print_info: n_rot = 128
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 6144
print_info: n_expert = 128
print_info: n_expert_used = 8
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 40960
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 30B.A3B
print_info: model params = 30.53 B
print_info: general.name = Qwen3-30B-A3B
print_info: n_ff_exp = 768
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 11 ','
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151654 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors: Vulkan0 model buffer size = 16722.36 MiB
load_tensors: CPU_Mapped model buffer size = 166.92 MiB
....................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 100000
llama_context: n_ctx_per_seq = 100000
llama_context: n_batch = 2048
llama_context: n_ubatch = 2048
llama_context: causal_attn = 1
llama_context: flash_attn = 1
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (100000) > n_ctx_train (40960) -- possible training context overflow
llama_context: Vulkan_Host output buffer size = 0.58 MiB
llama_kv_cache_unified: Vulkan0 KV buffer size = 4985.25 MiB
llama_kv_cache_unified: size = 4985.25 MiB (100096 cells, 48 layers, 1 seqs), K (q8_0): 2492.62 MiB, V (q8_0): 2492.62 MiB
llama_context: Vulkan0 compute buffer size = 1269.02 MiB
llama_context: Vulkan_Host compute buffer size = 798.02 MiB
llama_context: graph nodes = 3031
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 100096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv init: initializing slots, n_slots = 1
slot init: id 0 | task -1 | new slot n_ctx_slot = 100096
main: model loaded
main: chat template, chat_template: {%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for forward_message in messages %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- set message = messages[index] %}
{%- set current_content = message.content if message.content is defined and message.content is not none else '' %}
{%- set tool_start = '<tool_response>' %}
{%- set tool_start_length = tool_start|length %}
{%- set start_of_message = current_content[:tool_start_length] %}
{%- set tool_end = '</tool_response>' %}
{%- set tool_end_length = tool_end|length %}
{%- set start_pos = (current_content|length) - tool_end_length %}
{%- if start_pos < 0 %}
{%- set start_pos = 0 %}
{%- endif %}
{%- set end_of_message = current_content[start_pos:] %}
{%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set m_content = message.content if message.content is defined and message.content is not none else '' %}
{%- set content = m_content %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is defined and message.reasoning_content is not none %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in m_content %}
{%- set content = (m_content.split('</think>')|last).lstrip('\n') %}
{%- set reasoning_content = (m_content.split('</think>')|first).rstrip('\n') %}
{%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and (not reasoning_content.strip() == '')) %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is false %}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
{%- endif %}, example_format: '<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant
Hi there<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
'
main: server is listening on http://0.0.0.0:8081 - starting the main loop
srv update_slots: all slots are idle
srv log_server_r: request: GET /v1/models 172.18.0.6 200
(8 further identical GET /v1/models requests omitted)
srv params_from_: Chat format: Content-only
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 32768, n_keep = 0, n_prompt_tokens = 17906
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 2048, n_tokens = 2048, progress = 0.114375
/home/ultimis/LLM/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:5378: GGML_ASSERT(nei0 * nei1 <= 4096) failed
[New LWP 33408]
(... 129 similar "[New LWP ...]" thread-attach lines omitted ...)
[New LWP 33277]
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/libvulkan_lvp.so
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libtinfo.so.6
warning: could not find '.gnu_debugaltlink' file for /usr/lib/x86_64-linux-gnu/libvulkan_radeon.so
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libVkLayer_MESA_device_select.so
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f9b76b107e3 in __GI___wait4 (pid=33552, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
warning: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory
#0 0x00007f9b76b107e3 in __GI___wait4 (pid=33552, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x00007f9b77140d23 in ggml_print_backtrace () from /home/ultimis/LLM/llama.cpp/build/bin/libggml-base.so
#2 0x00007f9b77140e3f in ggml_abort () from /home/ultimis/LLM/llama.cpp/build/bin/libggml-base.so
#3 0x00007f9b74f51ca2 in ggml_vk_mul_mat_id_q_f16(ggml_backend_vk_context*, std::shared_ptr<vk_context_struct>&, ggml_tensor const*, ggml_tensor const*, ggml_tensor const*, ggml_tensor*, bool) () from /home/ultimis/LLM/llama.cpp/build/bin/libggml-vulkan.so
#4 0x00007f9b74f676f3 in ggml_vk_build_graph(ggml_backend_vk_context*, ggml_tensor*, int, ggml_tensor*, int, bool, bool, bool, bool) () from /home/ultimis/LLM/llama.cpp/build/bin/libggml-vulkan.so
#5 0x00007f9b74f68abc in ggml_backend_vk_graph_compute(ggml_backend*, ggml_cgraph*) () from /home/ultimis/LLM/llama.cpp/build/bin/libggml-vulkan.so
#6 0x00007f9b77156c93 in ggml_backend_sched_graph_compute_async () from /home/ultimis/LLM/llama.cpp/build/bin/libggml-base.so
#7 0x00007f9b7728dc11 in llama_context::graph_compute(ggml_cgraph*, bool) () from /home/ultimis/LLM/llama.cpp/build/bin/libllama.so
#8 0x00007f9b7728e163 in llama_context::process_ubatch(llama_ubatch const&, llm_graph_type, llama_memory_state_i*, ggml_status&) () from /home/ultimis/LLM/llama.cpp/build/bin/libllama.so
#9 0x00007f9b77292066 in llama_context::decode(llama_batch&) () from /home/ultimis/LLM/llama.cpp/build/bin/libllama.so
#10 0x00007f9b7729322f in llama_decode () from /home/ultimis/LLM/llama.cpp/build/bin/libllama.so
#11 0x000062e49ccd8a52 in server_context::update_slots() ()
#12 0x000062e49cca05cc in server_queue::start_loop() ()
#13 0x000062e49cc68eb0 in main ()
[Inferior 1 (process 33276) detached]
^CAborted (core dumped)