Conversation

Collaborator

@dkjung dkjung commented Dec 4, 2025

  • Add CPU implementation of int4 GEMM with packed block optimization
  • Add CUDA implementation of int4 GEMM operation
  • Add unit tests for int4 GEMM operations
  • Update quantize related files to support int4 operations
  • Fix function name from int4 to int8 in quantize functions
  • Add utility functions for matrix operations and vector generation

Signed-off-by: Daekyoung Jung <[email protected]>

This commit adds CUDA context management files (cuda_context.h and cuda_context.cpp)
that provide similar functionality to the existing OpenCL context.

The changes include:

- Implementation of CudaContext class inheriting from Context and Singleton
- CUDA kernel management and execution interface
- Build system updates to support CUDA with enable-cuda meson_options
- Conditional linking of CUDA runtime library for both Windows and Linux
- Addition of enable-cuda option in meson_options.txt
- Implementation of RMSNorm CUDA kernel and build configuration

Signed-off-by: Daekyoung Jung <[email protected]>
This commit includes the following changes:
1. Add new CUDA unit test file (unittest_cuda.cpp) with RMSNorm CUDA kernel
   tests
2. Reorganize CUDA operations directory structure by moving subdir inclusion
   from nntrainer/meson.build to nntrainer/tensor/meson.build
3. Add CUDA test target in test/unittest/meson.build
4. Fix CUDA linking issues by adding proper link arguments (-NOIMPLIB, -NOEXP)
   to prevent generation of unnecessary .lib and .exp files
5. Add CUDA dependencies handling in unit test build configuration

The changes ensure proper CUDA support in the build system and add
comprehensive unit tests for CUDA operations.

Signed-off-by: Daekyoung Jung <[email protected]>
This commit introduces CUDA support for addition operations:

1. Added new CUDA files:
   - `nntrainer/tensor/cuda_operations/addition_cuda.cu`: Implementation
     of CUDA addition kernel
   - `nntrainer/tensor/cuda_operations/addition_cuda.h`: Header for CUDA
     addition functions
   - `nntrainer/tensor/cuda_operations/cuda_interface.cpp`: Implementation
     of CUDA interface functions
   - `nntrainer/tensor/cuda_operations/cuda_interface.h`: Header for CUDA
     interface

2. Updated build configuration:
   - Modified meson.build to include new CUDA files in the build
   - Updated test/unittest/meson.build to add unittest_cuda_addition target

3. Added unit test:
   - `test/unittest/unittest_cuda_addition.cpp`: Unit test for CUDA addition
     operations with timing measurements

The new implementation provides:
- CUDA kernel for element-wise addition operations
- CUDA interface functions for tensor operations
- Comprehensive unit test with performance timing

Signed-off-by: Daekyoung Jung <[email protected]>
- Format all CUDA files in nntrainer/tensor/cuda_operations with clang-format
- Add GGML Q8_1 quantization/dequantization implementation for CUDA
- Include CPU fallback functions for quantization operations
- Add unit tests for CUDA Q8_1 quantization functionality
- Update meson build files to include new CUDA operations
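For reference, here is a minimal CPU sketch of GGML-style Q8_1 quantization for one block. The field names (`d`, `s`, `qs`) follow the public ggml `block_q8_1` layout; the actual CPU fallback added in this PR may differ in detail.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// One Q8_1 block: 32 values, a float scale d, and s = d * sum of quants.
constexpr int QK8_1 = 32;

struct block_q8_1 {
  float d;          // scale
  float s;          // d * sum of the quantized values
  int8_t qs[QK8_1]; // quantized values
};

// Reference (non-vectorized) quantization of one block of 32 floats.
void quantize_row_q8_1_ref(const float *x, block_q8_1 *y) {
  float amax = 0.0f;
  for (int i = 0; i < QK8_1; ++i)
    amax = std::max(amax, std::fabs(x[i]));
  const float d = amax / 127.0f;
  const float id = d != 0.0f ? 1.0f / d : 0.0f;
  int sum = 0;
  for (int i = 0; i < QK8_1; ++i) {
    y->qs[i] = static_cast<int8_t>(std::roundf(x[i] * id));
    sum += y->qs[i];
  }
  y->d = d;
  y->s = d * static_cast<float>(sum);
}
```

Precomputing `s` lets the GEMM kernel fold the zero-point correction of the int4 weights into a single multiply per block instead of summing the activations again.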

Signed-off-by: Daekyoung Jung <[email protected]>
- Move int4 quantization test from unittest_blas_kernels_cl.cpp to
  new unittest_quantize_cl.cpp for better organization
- Create shared test utilities (unittest_util.h/cpp) with:
  * generate_random_vector template function
  * allocateSVM/freeSVM helper functions
- Add unittest_util.cpp to OpenCL test targets in meson.build
- Update blas_kernels to support shared test utilities
- Add CUDA int4 GEMM kernel implementation (gemm_int4_cuda.cu/h)
- Update GGML quantization headers and implementations

This commit adds a CUDA implementation of INT4 quantization with padding.
It matches the existing OpenCL kernel behavior for compatibility.

Changes:
- Add quantize_input_int4_pad_kernel CUDA kernel function
- Add quantize_input_int4_pad_cuda wrapper function
- Update unit tests to use new CPU reference implementation
- Add round_half_to_even helper function for rounding to nearest even
- Add cpu_quantize_input_int4_pad CPU reference implementation
- Add unittest_util.cpp to unittest_cuda_quantize target
- Add new test case for M=63, K=3072, G=32
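A hypothetical sketch of the `round_half_to_even` helper named above: round to nearest, with ties going to the even integer (banker's rounding). The in-tree implementation may differ.

```cpp
#include <cassert>
#include <cmath>

// Round to nearest integer; exact halves go to the even neighbor.
inline float round_half_to_even(float x) {
  const float r = std::floor(x);
  const float diff = x - r;
  if (diff > 0.5f)
    return r + 1.0f;
  if (diff < 0.5f)
    return r;
  // exactly halfway: pick the even neighbor
  return std::fmod(r, 2.0f) == 0.0f ? r : r + 1.0f;
}
```

Note that `std::nearbyint` under the default `FE_TONEAREST` rounding mode gives the same behavior; an explicit helper keeps the CPU reference bit-matched to the CUDA kernel regardless of the host's floating-point environment.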

Signed-off-by: Daekyoung Jung <[email protected]>
- Add CPU implementation of int4 GEMM with packed block optimization
- Add CUDA implementation of int4 GEMM operation
- Add unit tests for int4 GEMM operations
- Update quantize related files to support int4 operations
- Fix function name from int4 to int8 in quantize functions
- Add utility functions for matrix operations and vector generation
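As a rough illustration of what an int4 GEMM must handle, here is a sketch of packing and unpacking two signed 4-bit weights per byte, low nibble first. The actual packed-block layout in this PR (interleaving, group scales) is not shown and may differ.

```cpp
#include <cassert>
#include <cstdint>

// Pack two signed 4-bit values into one byte, low nibble first.
inline uint8_t pack_int4_pair(int lo, int hi) {
  return static_cast<uint8_t>((lo & 0xF) | ((hi & 0xF) << 4));
}

// Sign-extend the low nibble back to a signed int in [-8, 7].
inline int unpack_int4_lo(uint8_t b) {
  const int v = b & 0xF;
  return v >= 8 ? v - 16 : v;
}

// Sign-extend the high nibble back to a signed int in [-8, 7].
inline int unpack_int4_hi(uint8_t b) {
  const int v = (b >> 4) & 0xF;
  return v >= 8 ? v - 16 : v;
}
```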

Signed-off-by: Daekyoung Jung <[email protected]>
// Load input values and scale
// k_start and k_start+1 are in the same group (assuming group_size >= 2)
unsigned int group_id_in_row = k_start / quantization_group_size;
Member

This does not need to be recalculated in this loop; it can be hoisted out of the loop body to save computation.


// Input indices
// Note: input is quantized with padding, so we use block addressing
unsigned int offset_in_group = k_start % quantization_group_size;
Member

This does not need to be recalculated in this loop; it can be hoisted out of the loop body to save computation.
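Both comments point at the same pattern: `group_id_in_row` and `offset_in_group` depend only on `k_start`, so they are invariant with respect to the inner loop and can be hoisted. A generic standalone sketch (the surrounding loop structure is assumed, not copied from the kernel):

```cpp
#include <cassert>
#include <vector>

// Loop-invariant hoisting: values derived only from k_start are computed
// once per k_start iteration, outside the inner m-loop, instead of once
// per (k_start, m) pair.
std::vector<float> accumulate(const std::vector<float> &scales,
                              unsigned int M, unsigned int K,
                              unsigned int quantization_group_size) {
  std::vector<float> acc(M, 0.0f);
  for (unsigned int k_start = 0; k_start < K; k_start += 2) {
    // hoisted: depend only on k_start, not on m
    const unsigned int group_id_in_row = k_start / quantization_group_size;
    const unsigned int offset_in_group = k_start % quantization_group_size;
    for (unsigned int m = 0; m < M; ++m)
      acc[m] += scales[group_id_in_row] + static_cast<float>(offset_in_group);
  }
  return acc;
}
```

In a CUDA kernel the compiler can often hoist such expressions itself, but doing it explicitly keeps register pressure predictable and makes the intent clear.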

Member

@myungjoo myungjoo left a comment
Unit test should include the following cases:

  • N % 32 > 0
  • K % 2 > 0
  • Small M (M=1, 5)
  • CUDA error handling (call CUDA-enabled binaries in a non-CUDA environment and verify the error is handled gracefully)
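A hedged sketch of a shape table covering the requested cases; the M/N/K values are illustrative, not taken from the PR, and the error-handling case would need a separate runtime check rather than a shape entry.

```cpp
#include <cassert>

// Illustrative shape cases for the reviewer's checklist; actual test
// shapes in the PR may differ.
struct GemmCase {
  unsigned int M, N, K;
};

constexpr GemmCase kCases[] = {
  {1, 33, 3072},    // small M (M = 1) and N % 32 > 0
  {5, 3072, 31},    // small M (M = 5) and K % 2 > 0
  {63, 3072, 3072}, // baseline-style shape
};
```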


// 2. Compute dot product with each row of input (M)
for (unsigned int m = 0; m < M; ++m) {
  float sum = 0.0f;
Member

  1. Please check whether having loop-m inside loop-k is faster than loop-k inside loop-m.
  2. Try to cache calculations as much as possible (the same calculation/memory operation appears to be repeated in the loop).
