Don't allow zero as temperature in ModelSettings.qml #2777

Closed

Conversation

@cosmic-snow (Collaborator) commented Jul 30, 2024

Describe your changes

A number lower than 0.000001 is set to that value instead.
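
For illustration, a minimal QML sketch of the kind of correction described, with a hypothetical `TextField` and an assumed settings setter (names are illustrative; the actual ModelSettings.qml code differs):

```qml
import QtQuick
import QtQuick.Controls

// Hypothetical temperature field; identifiers here are illustrative only.
TextField {
    id: temperatureField
    text: root.currentModelInfo.temperature

    onEditingFinished: {
        let value = parseFloat(text)
        // Clamp zero (and anything below the minimum) up to 0.000001.
        if (isNaN(value) || value < 0.000001)
            value = 0.000001
        MySettings.setModelTemperature(root.currentModelInfo, value) // assumed setter
    }
}
```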

Issue ticket number and link

N/A

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.
  • I have added thorough documentation for my code.
  • I have tagged PR with relevant project labels. I acknowledge that a PR without labels may be dismissed.
  • If this PR addresses a bug, I have provided both a screenshot/video of the original bug and the working solution.

Demo

Editing:
[screenshot]

After edit:
[screenshot]

Steps to Reproduce

Right now, setting the temperature to zero is allowed, but it shouldn't be.

Notes

This PR is meant to be seen as a suggestion for now.

I'm setting this as draft for the following reasons:

  • Maybe this should be handled in the backend instead, and maybe 0 as temperature should be allowed again.
  • I think it prevents setting zero as a value, but it seems to be flaky in the UI regardless (e.g. when I Alt-Tab).
  • I have not figured out how to get rid of the scientific notation after it corrects itself.

A number lower than 0.000001 is set to that value instead.

Signed-off-by: Cosmic Snow <[email protected]>
@cosmic-snow added the `chat` (gpt4all-chat issues) label on Jul 30, 2024
- Using .toLocaleString() gets rid of the scientific notation display
- Setting TextField's text when correcting the value fixes flakiness

Signed-off-by: Cosmic Snow <[email protected]>
@cosmic-snow (Collaborator, Author) commented Jul 30, 2024

The second commit improves points 2 and 3 from the notes.

The improved behaviour uses .toLocaleString(), although it's probably the wrong locale. I think it should be the one from the selected translation, once those are ready.

Edit: Parsing might need some changes with different locales, too.

Edit 2: Turns out you're using QLocale::setDefault() in the translation logic and QML's Qt.locale() does the right thing with that.
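
A hedged sketch of what those two fixes might look like inside the same hypothetical handler as above (not the literal diff):

```qml
// Inside the hypothetical temperatureField from the earlier sketch.
onEditingFinished: {
    let value = parseFloat(text)
    if (isNaN(value) || value < 0.000001)
        value = 0.000001
    // .toLocaleString() avoids the scientific-notation display ("1e-6"),
    // and explicitly writing the corrected value back into the TextField
    // keeps the UI consistent, e.g. across focus changes such as Alt-Tab.
    text = value.toLocaleString(Qt.locale(), 'f', 6)
    MySettings.setModelTemperature(root.currentModelInfo, value) // assumed setter
}
```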

- make validator locale-aware
- do locale processing everywhere a conversion between double and string happens
- Qt's `Number.toLocaleString()` with a string as locale is buggy; use its extension instead

Signed-off-by: Cosmic Snow <[email protected]>
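
To make those commit notes concrete, a sketch of a locale-aware version (same illustrative names and assumptions as above, not the actual diff):

```qml
TextField {
    id: temperatureField

    // DoubleValidator's `locale` property makes it accept the locale's
    // decimal separator, e.g. a comma for de_DE.
    validator: DoubleValidator {
        locale: Qt.locale().name
        bottom: 0.000001
    }

    onEditingFinished: {
        let value
        try {
            // Parse with the locale's conventions instead of parseFloat().
            value = Number.fromLocaleString(Qt.locale(), text)
        } catch (err) {
            value = NaN
        }
        if (isNaN(value) || value < 0.000001)
            value = 0.000001
        // Format back via the Locale object; passing a locale *name* string
        // to toLocaleString() is the buggy path mentioned in the commit.
        text = value.toLocaleString(Qt.locale(), 'f', 6)
    }
}
```

Since QML's `Qt.locale()` follows `QLocale::setDefault()`, this picks up the locale set by the translation logic automatically.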
@cosmic-snow (Collaborator, Author) commented

Should be good now, apart from a minor cosmetic glitch (rare corner case).

I tested it by adding the following to MySettings::globalInstance()->setLanguageAndLocale(), at line 54:

QLocale::setDefault(QLocale("de_DE"));

Try to break it; these should no longer be possible:

  • Setting a temperature of 0 or below.
  • Entering a decimal separator other than a comma with de_DE as the locale.

Note, however: this is not yet hooked up to the translation dropdown/locale changes. That would be the next thing to work on, for all input fields that need to be locale-aware.

@cosmic-snow marked this pull request as ready for review on August 1, 2024, 12:48
@cebtenzzre (Member) left a comment

I am hesitant to disable functionality of built-in and custom OpenAI models, and to add llama.cpp-specific assumptions to the UI, just to avoid branching to the built-in llama_sample_token_greedy when the temperature is zero here. I think this problem can actually be properly fixed in fewer lines of code than this workaround.

@cosmic-snow (Collaborator, Author) commented

@cebtenzzre Just to be clear, part of this is also making the UI input locale-aware (decimal separator can vary).

The primary goal here was to fix a bug: there's a division by zero which is currently being hit in the backend (oddly, it only seems to trip up debug builds, but those are what I build). Greedy sampling with temp 0 was once possible in GPT4All (sometime last year); I don't know exactly when that changed.

I am not at all opposed to removing the limit again if/once there's a fix for that in the backend -- actually, I am very much in favour of being allowed to set it to 0.

@cebtenzzre (Member) commented

> @cebtenzzre Just to be clear, part of this is also making the UI input locale-aware (decimal separator can vary).

That's appreciated. If you'd like to make a PR or change this one such that all four instances of parseFloat in ModelSettings.qml are replaced with something locale-aware, that'd be great.

> The primary goal here was to fix a bug: there's a division by zero which is currently being hit in the backend (oddly, it only seems to trip up debug builds, but those are what I build). Greedy sampling with temp 0 was once possible in GPT4All (sometime last year); I don't know exactly when that changed.

But only llama.cpp divides by zero, because of a missing branch. We are actively working (#2806) towards having first-class support for backends other than llamamodel.cpp, so I feel like adding more llamamodel-specific assumptions to the UI code is a step in the wrong direction. Both ollama and OpenAI support temp=0.

> I am not at all opposed to removing the limit again if/once there's a fix for that in the backend -- actually, I am very much in favour of being allowed to set it to 0.

Okay, done: #2854

@cosmic-snow (Collaborator, Author) commented

> That's appreciated. If you'd like to make a PR or change this one such that all four instances of parseFloat in ModelSettings.qml are replaced with something locale-aware, that'd be great.

Yes, that was the idea after this one. Although there may be one other thing to consider: in this PR (and probably the next) I had to remove the direct connection in the QML (`text: root.currentModelInfo.temperature`) because of the parsing and conversion.

An alternative would be to have the model be "more closely aligned" with the UI and do the conversions in C++ instead of JS.

In any case, with #2854 merged, this PR is now obsolete.

@cebtenzzre (Member) commented

> Although there may be one other thing to consider: in this PR (and probably the next) I had to remove the direct connection in the QML (`text: root.currentModelInfo.temperature`) because of the parsing and conversion.

That shouldn't be a problem, since the binding is removed once onTemperatureChanged or onCurrentModelInfoChanged fires anyway. The only concern is the value being initially blank; in that case, you may need a Component.onCompleted handler.
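
For illustration, a minimal sketch of that fallback, using the same hypothetical field as before:

```qml
TextField {
    id: temperatureField
    // No direct binding on `text`, since the displayed string needs
    // locale-aware formatting rather than the raw model value.
    Component.onCompleted: {
        // Populate the field once at startup so it is never initially blank.
        text = root.currentModelInfo.temperature.toLocaleString(Qt.locale(), 'f', 6)
    }
}
```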
