
[FR] Improve UI support for reasoning models (local AI) #8125

@yar85

Description

Some models, such as Qwen3, emit reasoning text between <think> tags. Currently this part of the response is displayed as-is, which makes the chat difficult to read.
It would be great to keep thinking mode enabled (it improves answer quality) but hide the thoughts in the AI responses.

Impact

This improvement will benefit all users running local AI models with reasoning capabilities.

Additional Context

With ollama, this can be achieved in one of two ways:

  • stripping the <think> block entirely; how this is done depends on how the app talks to ollama: run the model with the --hidethinking parameter (CLI), ignore the thinking field in the response object (JSON API), or remove the block with a regex (see the sketch after this list);
  • wrapping the reasoning text in an expandable text section above the chat response, similar to ollama-webui and other LLM client apps.
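A minimal TypeScript sketch of the first option. The `thinking` field name comes from ollama's JSON API as described above; the `OllamaChatMessage` shape and the function names `visibleText` and `stripThinkTags` are hypothetical, for illustration only:

```ts
// Option A: when the model is queried with thinking enabled, newer ollama
// versions return the reasoning in a separate `thinking` field, so the UI
// can render `content` and simply ignore `thinking`.
interface OllamaChatMessage {
  role: string;
  content: string;
  thinking?: string; // present for reasoning models when thinking is enabled
}

function visibleText(message: OllamaChatMessage): string {
  return message.content; // drop message.thinking entirely
}

// Option B: the reasoning arrives inline inside <think>…</think> tags;
// strip it with a regex before rendering.
function stripThinkTags(raw: string): string {
  return raw.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
```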

The second option is preferable, of course. But if it turns out to be inconvenient in some parts of the UI, the first is also acceptable. A sketch of the expandable-section rendering follows below.
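A minimal sketch of the expandable-section approach, using a plain HTML <details> element collapsed by default, similar to what ollama-webui does. The functions `splitThinking` and `renderMessage` are hypothetical names, not part of any existing codebase:

```ts
function splitThinking(raw: string): { thinking: string | null; answer: string } {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  return {
    thinking: match ? match[1].trim() : null,
    answer: raw.replace(/<think>[\s\S]*?<\/think>/, '').trim(),
  };
}

function renderMessage(raw: string): HTMLElement {
  const { thinking, answer } = splitThinking(raw);
  const container = document.createElement('div');
  if (thinking) {
    // Collapsed by default; the user can expand it to inspect the reasoning.
    const details = document.createElement('details');
    const summary = document.createElement('summary');
    summary.textContent = 'Thinking…';
    details.appendChild(summary);
    const pre = document.createElement('pre');
    pre.textContent = thinking;
    details.appendChild(pre);
    container.appendChild(details);
  }
  const body = document.createElement('p');
  body.textContent = answer;
  container.appendChild(body);
  return container;
}
```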
