
fix config #178


Merged
merged 1 commit into main from n/fix-large-onnx on Aug 1, 2025

Conversation

@ncfrey (Collaborator) commented on Aug 1, 2025

Description

This pull request includes a configuration update for the UME_large model in the ModernBERT module. The changes adjust several parameters to better align with the intended architecture.

Configuration updates for the UME_large model (see the sketch after this list):

  • src/lobster/model/modern_bert/_modern_bert_configuration.py: Updated UME_large model parameters:
    • Reduced num_attention_heads from 25 to 24.
    • Increased intermediate_size from 6400 to 6912.
    • Increased hidden_size from 1600 to 1728.
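
For reference, here is a minimal sketch of how the updated UME_large entry might look after this change. The dict name and the surrounding structure are assumptions made for illustration; only the three updated values come from this PR.

```python
# Hypothetical sketch of the UME_large configuration entry after this PR.
# The dict name (UME_MODEL_CONFIGS) and the overall layout are assumptions;
# only hidden_size, intermediate_size, and num_attention_heads are taken
# from the change description above.
UME_MODEL_CONFIGS = {
    "UME_large": {
        "hidden_size": 1728,        # was 1600
        "intermediate_size": 6912,  # was 6400
        "num_attention_heads": 24,  # was 25
    },
}
```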

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Performance improvement
  • Code refactoring

@ncfrey requested review from Copilot and karinazad on August 1, 2025 at 16:16
@Copilot (Contributor) left a comment

Pull Request Overview

This pull request fixes configuration parameters for the UME_large model in the ModernBERT module to align with the intended architecture specifications.

Key changes:

  • Corrected the attention head count so that the hidden size is evenly divisible by it
  • Adjusted intermediate and hidden sizes to match proper model dimensions

"num_attention_heads": 25,
"intermediate_size": 6400,
"hidden_size": 1600,
"num_attention_heads": 24,
Copilot AI commented on Aug 1, 2025:
The change from 25 to 24 attention heads ensures the hidden_size (1728) is evenly divisible by num_attention_heads (24), which is required for multi-head attention to work correctly. This fixes a potential runtime error where head_dim would not be an integer.
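
The constraint is straightforward to verify: the per-head dimension is hidden_size divided by num_attention_heads, so the division must be exact. A minimal sketch follows; the helper function is illustrative and not part of the lobster codebase.

```python
# Illustrative check of the multi-head attention shape constraint described
# above; this helper is an assumption, not a function from the repository.
def head_dim(hidden_size: int, num_attention_heads: int) -> int:
    """Return the per-head dimension, failing if the split is uneven."""
    if hidden_size % num_attention_heads != 0:
        raise ValueError(
            f"hidden_size={hidden_size} must be divisible by "
            f"num_attention_heads={num_attention_heads}"
        )
    return hidden_size // num_attention_heads

print(head_dim(1728, 24))  # 72 -- the updated UME_large configuration
```

With the updated values, each of the 24 heads operates on a 72-dimensional slice of the 1728-dimensional hidden state.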


@taylormjs merged commit 28af2ce into main on Aug 1, 2025
4 checks passed
@taylormjs deleted the n/fix-large-onnx branch on August 1, 2025 at 19:57