Hi, I'd like to ask about the training efficiency of LLaVA-OneVision-1.5.
If I train the LLaVA-OneVision-1.5 model using LMMS Engine, will the achieved MFU (Model FLOPs Utilization) be higher than training the same model with the Megatron-LM–based LLaVA-OneVision-1.5 training framework? In other words, does LMMS Engine provide better hardware utilization for this model under comparable settings (same GPUs, precision, sequence length, batch size, parallelism strategy, etc.)?
If you have any benchmarking results or guidance on how to reproduce an apples-to-apples MFU comparison (including recommended configs, logging hooks, and how MFU is computed), that would be very helpful. Thanks!
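For context on that last point, here is a minimal sketch of how I would expect MFU to be estimated for dense transformer training; the 6·N FLOPs-per-token approximation and the per-GPU peak figure are my own assumptions, not values taken from LMMS Engine or the Megatron-LM-based framework, so please correct me if either repo computes it differently.

```python
# Minimal MFU estimate sketch (assumptions, not the framework's own logging):
# model training FLOPs per token are approximated as 6 * num_params
# (forward + backward), attention FLOPs are ignored, and per_gpu_peak_flops
# must match the hardware and precision actually used.

def estimate_mfu(
    num_params: float,          # trainable parameters, e.g. 8e9
    tokens_per_second: float,   # global training throughput in tokens/s
    num_gpus: int,              # total GPUs in the job
    per_gpu_peak_flops: float,  # e.g. ~989e12 for H100 BF16 dense
) -> float:
    """Return MFU as a fraction of theoretical peak FLOPs."""
    achieved_flops = 6.0 * num_params * tokens_per_second
    peak_flops = num_gpus * per_gpu_peak_flops
    return achieved_flops / peak_flops


if __name__ == "__main__":
    # Hypothetical example: 8B-parameter model at 250k tokens/s on 32 H100s.
    mfu = estimate_mfu(num_params=8e9,
                       tokens_per_second=2.5e5,
                       num_gpus=32,
                       per_gpu_peak_flops=989e12)
    print(f"Estimated MFU: {mfu:.1%}")
```

If either framework accounts for attention FLOPs or multimodal encoder FLOPs in its MFU number, knowing that would also help make the comparison apples-to-apples.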