
[DeepSeek][Kernels] MoE sorting - Scatter Gather kernels #1065

Open
lessw2020 wants to merge 10 commits into main from lessw2020/moe_sorting_kernels

Conversation

@lessw2020
Contributor

Encapsulates the MoE token scatter/gather in CUDA kernels. Effectively performs the following PyTorch operations, but via CUDA:

import torch

def pytorch_sort_tokens(topk_ids, x, n_experts):
    """PyTorch reference implementation for comparison."""
    with torch.no_grad():
        # One-hot count matrix: [seq_len, n_experts]
        cnts = topk_ids.new_zeros((topk_ids.shape[0], n_experts))
        # Mark the experts selected for each token
        cnts.scatter_(1, topk_ids, 1)
        tokens_per_expert = cnts.sum(dim=0)
        # Flatten and argsort so token slots are grouped by expert id
        idxs = topk_ids.view(-1).argsort()
    # Gather source rows: idxs // topk maps each flat slot back to its token
    sorted_tokens = x[idxs // topk_ids.shape[1]]

    return sorted_tokens, idxs, tokens_per_expert
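To make the index arithmetic concrete, here is a minimal NumPy sketch of the same sorting logic (the function name `numpy_sort_tokens` and the toy tensors are illustrative, not part of the PR): a stable argsort over the flattened top-k expert ids groups token slots by expert, and integer division by the top-k width recovers each slot's source token row.

```python
import numpy as np

def numpy_sort_tokens(topk_ids, x, n_experts):
    # topk_ids: [seq_len, topk] expert id chosen for each token slot
    # x: [seq_len, hidden] token activations
    flat = topk_ids.reshape(-1)
    # Number of token slots routed to each expert
    tokens_per_expert = np.bincount(flat, minlength=n_experts)
    # Stable argsort groups flat slot indices by expert id
    idxs = np.argsort(flat, kind="stable")
    # idxs // topk maps each flat slot back to its source token row
    sorted_tokens = x[idxs // topk_ids.shape[1]]
    return sorted_tokens, idxs, tokens_per_expert

# Toy example: 3 tokens, top-2 routing over 3 experts
topk_ids = np.array([[0, 2], [1, 0], [2, 1]])
x = np.arange(6.0).reshape(3, 2)
sorted_tokens, idxs, counts = numpy_sort_tokens(topk_ids, x, n_experts=3)
# counts == [2, 2, 2]; idxs == [0, 3, 2, 5, 1, 4]
```

A token appears once per expert it is routed to, so `sorted_tokens` has `seq_len * topk` rows: duplicated activations laid out contiguously per expert, which is exactly the layout the grouped expert GEMM consumes.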

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Apr 7, 2025
