Tiny fix bench tgv gemm #2277
base: main
Conversation
Signed-off-by: vincentzed <[email protected]>
Walkthrough
The benchmark script refactors its timing implementation to use the centralized bench_gpu_time_with_cudagraph utility.
Estimated code review effort: 2 (Simple) | ~12 minutes
Pre-merge checks and finishing touches
Failed checks (1 warning, 1 inconclusive)
Passed checks (1 passed)
Finishing touches
Summary of Changes
Hello @vincentzed, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request refactors the TGV GEMM benchmarking script by integrating a new, centralized utility function for GPU time measurement. The change aims to standardize the benchmarking process, improve code readability, and ensure consistent performance evaluation across the CUBLAS, TGV, and PDL GEMM implementations.
Code Review
This pull request refactors the bench_tgv_gemm.py benchmark script to use the bench_gpu_time_with_cudagraph utility function, which simplifies the code and improves maintainability by removing boilerplate CUDA graph benchmarking logic.
While the refactoring is a good improvement, I've noticed a change in the benchmarking methodology. The number of iterations captured within the CUDA graph has been implicitly changed from 100 to the default of 10. This can affect the benchmark results by changing how kernel launch overhead is amortized. I've added comments with suggestions to restore the original number of iterations to ensure benchmark consistency.
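For context, below is a minimal sketch of the kind of manual CUDA graph timing loop such a utility typically replaces. This is an illustrative reconstruction built from standard PyTorch APIs, not the exact code removed by this PR, and the helper name manual_graph_time_ms is hypothetical.

```python
import torch


def manual_graph_time_ms(fn, num_iters_within_graph=100, num_replays=10):
    # Warm up on a side stream before capture, as required for CUDA graph capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            fn()
    torch.cuda.current_stream().wait_stream(s)

    # Capture num_iters_within_graph launches into a single graph so that
    # per-launch CPU overhead is amortized across the captured iterations.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        for _ in range(num_iters_within_graph):
            fn()

    # Time repeated replays of the graph with CUDA events.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(num_replays):
        g.replay()
    end.record()
    torch.cuda.synchronize()
    # Average time per benchmarked call, in milliseconds.
    return start.elapsed_time(end) / (num_replays * num_iters_within_graph)
```

Capturing more iterations inside the graph is what amortizes launch overhead, which is why the inline comments below suggest keeping num_iters_within_graph=100.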
```python
cublas_times = bench_gpu_time_with_cudagraph(
    lambda: F.linear(A, B.T, bias),
    dry_run_time_ms=100,
    repeat_time_ms=500,
    cold_l2_cache=False,
)
```
The previous implementation captured 100 iterations within the CUDA graph to amortize launch overhead. The bench_gpu_time_with_cudagraph function defaults to num_iters_within_graph=10. To maintain consistency with the previous benchmarking methodology and ensure better amortization of kernel launch overhead, it's recommended to explicitly set num_iters_within_graph=100.
Suggested change:

```diff
 cublas_times = bench_gpu_time_with_cudagraph(
     lambda: F.linear(A, B.T, bias),
     dry_run_time_ms=100,
     repeat_time_ms=500,
     cold_l2_cache=False,
+    num_iters_within_graph=100,
 )
```
```python
tgv_times = bench_gpu_time_with_cudagraph(
    lambda: tgv_gemm_sm100(A, B, bias),
    dry_run_time_ms=100,
    repeat_time_ms=500,
    cold_l2_cache=False,
)
```
The previous implementation captured 100 iterations within the CUDA graph to amortize launch overhead. The bench_gpu_time_with_cudagraph function defaults to num_iters_within_graph=10. To maintain consistency with the previous benchmarking methodology and ensure better amortization of kernel launch overhead, it's recommended to explicitly set num_iters_within_graph=100.
Suggested change:

```diff
 tgv_times = bench_gpu_time_with_cudagraph(
     lambda: tgv_gemm_sm100(A, B, bias),
     dry_run_time_ms=100,
     repeat_time_ms=500,
     cold_l2_cache=False,
+    num_iters_within_graph=100,
 )
```
```python
pdl_times = bench_gpu_time_with_cudagraph(
    lambda: tgv_gemm_sm100(A, B, bias, pdl=True),
    dry_run_time_ms=100,
    repeat_time_ms=500,
    cold_l2_cache=False,
)
```
The previous implementation captured 100 iterations within the CUDA graph to amortize launch overhead. The bench_gpu_time_with_cudagraph function defaults to num_iters_within_graph=10. To maintain consistency with the previous benchmarking methodology and ensure better amortization of kernel launch overhead, it's recommended to explicitly set num_iters_within_graph=100.
Suggested change:

```diff
 pdl_times = bench_gpu_time_with_cudagraph(
     lambda: tgv_gemm_sm100(A, B, bias, pdl=True),
     dry_run_time_ms=100,
     repeat_time_ms=500,
     cold_l2_cache=False,
+    num_iters_within_graph=100,
 )
```
Actionable comments posted: 0
Nitpick comments (3)
benchmarks/bench_tgv_gemm.py (3)
83-89: Consider using input_args to avoid capturing loop variables in lambda.

The lambda captures A, B, and bias from the loop scope. While this works because bench_gpu_time_with_cudagraph executes immediately, using input_args would be more explicit and would eliminate the static analysis warning.

Suggested refactor using input_args

Per the bench_gpu_time_with_cudagraph docstring, you can pass arguments explicitly:

```diff
-cublas_times = bench_gpu_time_with_cudagraph(
-    lambda: F.linear(A, B.T, bias),
-    dry_run_time_ms=100,
-    repeat_time_ms=500,
-    cold_l2_cache=False,
-)
+cublas_times = bench_gpu_time_with_cudagraph(
+    F.linear,
+    dry_run_time_ms=100,
+    repeat_time_ms=500,
+    cold_l2_cache=False,
+    input_args=(A, B.T, bias),
+)
```
101-107: Consider using input_args to avoid capturing loop variables in lambda.

Same pattern as the CUBLAS benchmark: the lambda captures loop-scoped variables. Using input_args would eliminate the static analysis warning.

Suggested refactor using input_args

```diff
-tgv_times = bench_gpu_time_with_cudagraph(
-    lambda: tgv_gemm_sm100(A, B, bias),
-    dry_run_time_ms=100,
-    repeat_time_ms=500,
-    cold_l2_cache=False,
-)
+tgv_times = bench_gpu_time_with_cudagraph(
+    tgv_gemm_sm100,
+    dry_run_time_ms=100,
+    repeat_time_ms=500,
+    cold_l2_cache=False,
+    input_args=(A, B, bias),
+)
```
114-120: Consider using input_args and input_kwargs to avoid capturing loop variables in lambda.

Same lambda closure pattern, but with a keyword argument. Using input_args and input_kwargs would eliminate the static analysis warning.

Suggested refactor using input_args and input_kwargs

```diff
-pdl_times = bench_gpu_time_with_cudagraph(
-    lambda: tgv_gemm_sm100(A, B, bias, pdl=True),
-    dry_run_time_ms=100,
-    repeat_time_ms=500,
-    cold_l2_cache=False,
-)
+pdl_times = bench_gpu_time_with_cudagraph(
+    tgv_gemm_sm100,
+    dry_run_time_ms=100,
+    repeat_time_ms=500,
+    cold_l2_cache=False,
+    input_args=(A, B, bias),
+    input_kwargs={"pdl": True},
+)
```
Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Files selected for processing (1)
benchmarks/bench_tgv_gemm.py
Additional context used
Code graph analysis (1)
benchmarks/bench_tgv_gemm.py (1)
flashinfer/testing/utils.py (1)
bench_gpu_time_with_cudagraph(1259-1481)
Ruff (0.14.10)
benchmarks/bench_tgv_gemm.py
84-84: Function definition does not bind loop variable A
(B023)
84-84: Function definition does not bind loop variable B
(B023)
84-84: Function definition does not bind loop variable bias
(B023)
102-102: Function definition does not bind loop variable A
(B023)
102-102: Function definition does not bind loop variable B
(B023)
102-102: Function definition does not bind loop variable bias
(B023)
115-115: Function definition does not bind loop variable A
(B023)
115-115: Function definition does not bind loop variable B
(B023)
115-115: Function definition does not bind loop variable bias
(B023)
Additional comments (1)
benchmarks/bench_tgv_gemm.py (1)
10-10: LGTM! The import of bench_gpu_time_with_cudagraph enables cleaner timing logic by replacing manual CUDA graph capture and replay.
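For reference, a minimal standalone usage sketch of the utility follows. The import path is assumed from the code-graph entry above (flashinfer/testing/utils.py), the tensor shapes and dtype are illustrative, and the function is assumed to return per-measurement times in milliseconds.

```python
import torch
import torch.nn.functional as F
from flashinfer.testing.utils import bench_gpu_time_with_cudagraph

# Illustrative problem size; the actual benchmark presumably sweeps several shapes.
A = torch.randn(8192, 4096, device="cuda", dtype=torch.bfloat16)
B = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
bias = torch.randn(4096, device="cuda", dtype=torch.bfloat16)

times = bench_gpu_time_with_cudagraph(
    lambda: F.linear(A, B.T, bias),
    dry_run_time_ms=100,
    repeat_time_ms=500,
    cold_l2_cache=False,
)
# Report the median, assuming the returned values are in milliseconds.
print(f"cuBLAS GEMM median time: {sorted(times)[len(times) // 2]:.4f} ms")
```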
Description

Related Issues
Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

Pre-commit Checks

- I have installed pre-commit by running pip install pre-commit (or used your preferred method).
- I have installed the hooks with pre-commit install.
- I have run pre-commit run --all-files and fixed any reported issues.

Tests

- Tests have been added or updated as needed.
- All tests are passing (unittest, etc.).

Reviewer Notes
Summary by CodeRabbit