Quantize unit tests to separate file #3591
base: main
Conversation
Adds CUDA context management files (cuda_context.h and cuda_context.cpp) that provide functionality similar to the existing OpenCL context. The changes include:
- CudaContext class inheriting from Context and Singleton
- CUDA kernel management and execution interfaces
- Build system updates to support CUDA via the enable-cuda option
- Conditional linking of the CUDA runtime library for both Windows and Linux
- Addition of the enable-cuda option in meson_options.txt
Signed-off-by: Daekyoung Jung <[email protected]>
This commit adds CUDA context management files (cuda_context.h and cuda_context.cpp) that provide functionality similar to the existing OpenCL context. The changes include:
- Implementation of the CudaContext class inheriting from Context and Singleton
- CUDA kernel management and execution interface
- Build system updates to support CUDA via the enable-cuda meson option
- Conditional linking of the CUDA runtime library for both Windows and Linux
- Addition of the enable-cuda option in meson_options.txt
- Implementation of the RMSNorm CUDA kernel and its build configuration
Signed-off-by: Daekyoung Jung <[email protected]>
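For orientation, here is a minimal sketch of what a row-wise RMSNorm CUDA kernel could look like. The kernel name, signature, and epsilon handling are assumptions for illustration; the actual implementation added in this commit may differ.

```cuda
#include <cuda_runtime.h>

// One block per row; threads cooperatively reduce the row's sum of squares.
// Assumes blockDim.x is a power of two (256 in the launcher below).
__global__ void rmsnorm_kernel(const float *__restrict__ in,
                               const float *__restrict__ gamma,
                               float *__restrict__ out, unsigned int width,
                               float eps) {
  extern __shared__ float sdata[];
  const float *row_in = in + blockIdx.x * width;
  float *row_out = out + blockIdx.x * width;

  // Partial sum of squares per thread.
  float sum = 0.0f;
  for (unsigned int i = threadIdx.x; i < width; i += blockDim.x)
    sum += row_in[i] * row_in[i];
  sdata[threadIdx.x] = sum;
  __syncthreads();

  // Tree reduction in shared memory.
  for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
    if (threadIdx.x < s)
      sdata[threadIdx.x] += sdata[threadIdx.x + s];
    __syncthreads();
  }

  // Scale each element by the reciprocal root mean square and gamma.
  const float inv_rms = rsqrtf(sdata[0] / width + eps);
  for (unsigned int i = threadIdx.x; i < width; i += blockDim.x)
    row_out[i] = row_in[i] * inv_rms * gamma[i];
}

// Hypothetical host launcher; parameter names are illustrative.
void rmsnorm_cuda(const float *in, const float *gamma, float *out,
                  unsigned int num_rows, unsigned int width, float eps) {
  const int block = 256;
  rmsnorm_kernel<<<num_rows, block, block * sizeof(float)>>>(in, gamma, out,
                                                             width, eps);
  cudaDeviceSynchronize();
}
```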
This commit includes the following changes:
1. Add a new CUDA unit test file (unittest_cuda.cpp) with RMSNorm CUDA kernel tests
2. Reorganize the CUDA operations directory structure by moving the subdir inclusion from nntrainer/meson.build to nntrainer/tensor/meson.build
3. Add a CUDA test target in test/unittest/meson.build
4. Fix CUDA linking issues by adding the proper link arguments (-NOIMPLIB, -NOEXP) to prevent generation of unnecessary .lib and .exp files
5. Add CUDA dependency handling to the unit test build configuration
The changes ensure proper CUDA support in the build system and add comprehensive unit tests for CUDA operations.
Signed-off-by: Daekyoung Jung <[email protected]>
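A rough sketch of how such a unit test could be structured, assuming the hypothetical rmsnorm_cuda launcher sketched above and plain cudaMalloc/cudaMemcpy buffers; the actual unittest_cuda.cpp may use different helpers, tolerances, and test names.

```cuda
#include <cmath>
#include <vector>
#include <cuda_runtime.h>
#include <gtest/gtest.h>

// Hypothetical launcher from the sketch above.
void rmsnorm_cuda(const float *in, const float *gamma, float *out,
                  unsigned int num_rows, unsigned int width, float eps);

TEST(nntrainer_cuda, rmsnorm_fp32_matches_cpu_reference) {
  const unsigned int width = 1024;
  const float eps = 1e-6f;
  std::vector<float> in(width), gamma(width, 1.0f), ref(width), got(width);
  for (unsigned int i = 0; i < width; ++i)
    in[i] = 0.01f * static_cast<float>(i % 100) - 0.5f;

  // CPU reference.
  float sq = 0.0f;
  for (float v : in)
    sq += v * v;
  const float inv_rms = 1.0f / std::sqrt(sq / width + eps);
  for (unsigned int i = 0; i < width; ++i)
    ref[i] = in[i] * inv_rms * gamma[i];

  // Device buffers and kernel launch.
  float *d_in = nullptr, *d_gamma = nullptr, *d_out = nullptr;
  ASSERT_EQ(cudaMalloc(reinterpret_cast<void **>(&d_in), width * sizeof(float)),
            cudaSuccess);
  ASSERT_EQ(cudaMalloc(reinterpret_cast<void **>(&d_gamma), width * sizeof(float)),
            cudaSuccess);
  ASSERT_EQ(cudaMalloc(reinterpret_cast<void **>(&d_out), width * sizeof(float)),
            cudaSuccess);
  cudaMemcpy(d_in, in.data(), width * sizeof(float), cudaMemcpyHostToDevice);
  cudaMemcpy(d_gamma, gamma.data(), width * sizeof(float), cudaMemcpyHostToDevice);

  rmsnorm_cuda(d_in, d_gamma, d_out, /*num_rows=*/1, width, eps);
  cudaMemcpy(got.data(), d_out, width * sizeof(float), cudaMemcpyDeviceToHost);

  for (unsigned int i = 0; i < width; ++i)
    EXPECT_NEAR(got[i], ref[i], 1e-5f);

  cudaFree(d_in);
  cudaFree(d_gamma);
  cudaFree(d_out);
}
```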
This commit introduces CUDA support for addition operations:
1. Added new CUDA files:
- `nntrainer/tensor/cuda_operations/addition_cuda.cu`: Implementation
of CUDA addition kernel
- `nntrainer/tensor/cuda_operations/addition_cuda.h`: Header for CUDA
addition functions
- `nntrainer/tensor/cuda_operations/cuda_interface.cpp`: Implementation
of CUDA interface functions
- `nntrainer/tensor/cuda_operations/cuda_interface.h`: Header for CUDA
interface
2. Updated build configuration:
- Modified meson.build to include new CUDA files in the build
- Updated test/unittest/meson.build to add unittest_cuda_addition target
3. Added unit test:
- `test/unittest/unittest_cuda_addition.cpp`: Unit test for CUDA addition
operations with timing measurements
The new implementation provides:
- CUDA kernel for element-wise addition operations
- CUDA interface functions for tensor operations
- Comprehensive unit test with performance timing
Signed-off-by: Daekyoung Jung <[email protected]>
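A minimal sketch of the element-wise addition path described above. The host signature matches the one visible in the diff further down this page; the kernel body, and in particular the modulo-based broadcast when size_res is a multiple of size_input, is an assumption borrowed from the existing OpenCL addition kernel.

```cuda
#include <cuda_runtime.h>

// res[i] += input[i % size_input]; the modulo handles the broadcast case
// where size_res is a multiple of size_input (assumed behavior).
__global__ void addition_kernel(const float *__restrict__ input,
                                float *__restrict__ res,
                                unsigned int size_input,
                                unsigned int size_res) {
  const unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < size_res)
    res[i] += input[i % size_input];
}

// Host wrapper; signature taken from the diff shown later in this PR.
void addition_cuda(const float *input, float *res, unsigned int size_input,
                   unsigned int size_res) {
  const int blockSize = 256;
  const int gridSize = (size_res + blockSize - 1) / blockSize;
  addition_kernel<<<gridSize, blockSize>>>(input, res, size_input, size_res);
  cudaDeviceSynchronize();
}
```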
- Format all CUDA files in nntrainer/tensor/cuda_operations with clang-format
- Add GGML Q8_1 quantization/dequantization implementation for CUDA
- Include CPU fallback functions for quantization operations
- Add unit tests for CUDA Q8_1 quantization functionality
- Update meson build files to include the new CUDA operations
Signed-off-by: Daekyoung Jung <[email protected]>
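For context, a simplified CPU-side sketch of Q8_1 block quantization: each block of 32 floats is stored as int8 quants plus a scale d and a precomputed s = d * sum(q), which speeds up later dot products. This illustrates the idea of the format only; the real GGML block_q8_1 stores d and s in half precision, and its exact layout may differ from this struct.

```cuda
#include <cmath>
#include <cstdint>

constexpr int QK8_1 = 32;

// Simplified stand-in for GGML's block_q8_1 (float instead of fp16 fields).
struct block_q8_1_sketch {
  float d;          // scale: amax / 127
  float s;          // d * sum of the quantized values
  int8_t qs[QK8_1]; // quantized values
};

// CPU fallback: quantize one block of QK8_1 floats.
void quantize_block_q8_1(const float *x, block_q8_1_sketch *y) {
  float amax = 0.0f;
  for (int i = 0; i < QK8_1; ++i)
    amax = std::fmax(amax, std::fabs(x[i]));

  const float d = amax / 127.0f;
  const float id = d > 0.0f ? 1.0f / d : 0.0f;

  int sum = 0;
  for (int i = 0; i < QK8_1; ++i) {
    const int q = static_cast<int>(std::round(x[i] * id));
    y->qs[i] = static_cast<int8_t>(q);
    sum += q;
  }
  y->d = d;
  y->s = d * static_cast<float>(sum);
}
```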
- Move the int4 quantization test from unittest_blas_kernels_cl.cpp to a new unittest_quantize_cl.cpp for better organization
- Create shared test utilities (unittest_util.h/cpp) with:
  * generate_random_vector template function
  * allocateSVM/freeSVM helper functions
- Add unittest_util.cpp to the OpenCL test targets in meson.build
- Update blas_kernels to use the shared test utilities
- Add CUDA int4 GEMM kernel implementation (gemm_int4_cuda.cu/h)
- Update GGML quantization headers and implementations
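The generate_random_vector helper mentioned above could look roughly like this; a sketch only, since the version in unittest_util.h may differ in seeding, defaults, and value range.

```cuda
#include <algorithm>
#include <random>
#include <vector>

// Fill a vector of `size` elements with uniformly distributed random values.
template <typename T>
std::vector<T> generate_random_vector(size_t size, float min_val = -1.0f,
                                      float max_val = 1.0f) {
  std::mt19937 gen(std::random_device{}());
  std::uniform_real_distribution<float> dist(min_val, max_val);
  std::vector<T> vec(size);
  std::generate(vec.begin(), vec.end(),
                [&]() { return static_cast<T>(dist(gen)); });
  return vec;
}
```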
 * @date 28 Nov 2025
 * @brief CUDA implementation of int4 GEMM operation
 * @see https://github.com/nnstreamer/nntrainer
 * @author [Your Name] <[[email protected]]>
Could you replace the placeholder with your actual name and email address?
}
}
*/
void openvino_quantize_input_int4_pad(void *input, void *quantized_input, void *scales,
This function does not seem relevant to this PR. It would be better to move it to a separate PR.
void addition_cuda(const float *input, float *res, unsigned int size_input,
                   unsigned int size_res) {
  const int blockSize = 256;
I guess blockSize could be tuned for the target hardware. If so, it might be better to take it as an input parameter.
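For illustration, one way to pick the block size from the device at runtime instead of hard-coding 256 is the CUDA occupancy API; addition_kernel here refers to the hypothetical kernel sketched earlier, and taking blockSize as an explicit parameter with a default value would be another option.

```cuda
#include <cuda_runtime.h>

__global__ void addition_kernel(const float *input, float *res,
                                unsigned int size_input, unsigned int size_res);

// Alternative launcher that lets CUDA suggest a block size for this kernel
// on the current device.
void addition_cuda_auto_block(const float *input, float *res,
                              unsigned int size_input, unsigned int size_res) {
  int minGridSize = 0, blockSize = 0;
  cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, addition_kernel);
  const int gridSize = (size_res + blockSize - 1) / blockSize;
  addition_kernel<<<gridSize, blockSize>>>(input, res, size_input, size_res);
  cudaDeviceSynchronize();
}
```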