This repository was archived by the owner on Jan 24, 2026. It is now read-only.
I'm using WhisperKitAndroid with the OPENAI_BASE multilingual model and it's working great for language detection. However, I'm running into an issue where it always translates non-English audio to English instead of transcribing it in the original language.
For example, when I speak in Spanish, I get:
Language detected: es ✅
Task mode: <|translate|> ❌
Output: English translation instead of Spanish transcription
I'd like to get Spanish text when I speak Spanish, French text when I speak French, etc.
Looking at the Builder API, I don't see any option to configure the task mode (transcribe vs. translate) or the language. The WhisperAX sample app has selectedTask and selectedLanguage state, but they don't appear to be wired into the actual WhisperKit configuration.
Is there a way to:
Force transcription mode instead of translation mode?
Set language to null for auto-detection with transcription?