
Remove graph breaks for torch.compile() in flash_attention_forward when Llama Model is padding-free tuned #511


Re-run triggered October 18, 2024 11:54
Status Failure
Total duration 4m 0s

benchmark.yml

on: pull_request

Annotations

1 error and 1 warning
Benchmark
Process completed with exit code 2.
Benchmark
This job failure may be caused by an out-of-date self-hosted runner. You are currently using runner version 2.319.1; please update to the latest version, 2.320.0.