Idea to improve the impact of our work as language leaders #1889
Replies: 5 comments 4 replies
-
This would definitely help! Here in Norway we have a lot of dialects, and people write differently (and with typos). It's challenging for me to keep track of things that don't work. Although we all understand what's written, certain words can be spelled differently depending on which part of the country you live in. I've noticed people complaining in forums, Facebook, etc., and I've encouraged some of them to either make a PR themselves, or for simple things, I've fixed it myself. The key here is that it's much easier with input/feedback from the users, and it would be nice to have an official feedback channel for this.
-
I fully agree we should give more feedback to the users and help them solve their issues. But, as you already said, we should avoid putting a lot of extra workload on the language leaders.

We as language leaders can provide translations and tweak the sentences for supported intents, so that as many ways as possible of asking to, e.g., turn on a light are supported. However, we cannot add support for new intents, because this requires changes we are unable to make. We all understand the YAML structure and the way the intents are set up, but supporting a new intent requires changes to the intent processing and HA Core, which most of us will probably be unable to do. For example, since the start of the whole intents repo, over a year ago, there have been intents defined to set the temperature of a climate entity, but this is still not supported. So our task as language leaders, providing the translations, is basically done; yet if a user tries this, they will just get an error.

On the other hand, I do agree it would be good if we, as language leaders, had a feedback channel for cases like the ones described by @LaStrada.

In any case, I think it's really good that there will be more focus on this, and that users will get a better understanding of why something is not working. The suggestion about getting a top-10 list of failing sentences sounds like a good middle ground. It would also give more insight into which domains or intents should get focus to be implemented. For example, I can imagine a lot of people currently try to control a media player, which is not working.
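The gap described above — translations exist, but HA Core has no handler for the intent — can be illustrated with a minimal sketch. All names and patterns below are hypothetical; this is not the actual intents/HA Core code:

```python
# Minimal illustration of the split between sentence matching (what
# language leaders control) and intent handling (what requires HA Core
# changes). All names here are hypothetical.
import re

# Sentence templates a language leader can add or tweak per language:
SENTENCES = {
    "HassTurnOn": [r"turn on (the )?(?P<name>.+)"],
    "HassClimateSetTemperature": [r"set (the )?temperature to (?P<temp>\d+)"],
}

# Intents that actually have a handler implemented in core:
HANDLED_INTENTS = {"HassTurnOn"}

def recognize(text: str):
    """Return (intent, slots) if any template matches, else None."""
    for intent, patterns in SENTENCES.items():
        for pattern in patterns:
            match = re.fullmatch(pattern, text.strip().lower())
            if match:
                return intent, match.groupdict()
    return None

def handle(text: str) -> str:
    result = recognize(text)
    if result is None:
        return "Sorry, I couldn't understand that"
    intent, slots = result
    if intent not in HANDLED_INTENTS:
        # The translated sentence matched, but core cannot act on it yet.
        return f"Sorry, {intent} is not supported yet"
    return f"Handling {intent} with {slots}"
```

With this sketch, `handle("set the temperature to 21")` matches a template but still fails, which is exactly the situation language leaders cannot fix from the sentences side alone.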
-
This is an excellent sum-up with very good suggestions. It does, however, open some new questions.

How can you tell if something is solvable by the user or not? A sentence may not be supported, or it may have to do with an unexposed entity, a misspelled alias, a missing area assignment for an entity, or other reasons. Can we reliably determine that a given sentence falls into the user-fixable category? Obviously, sentence formats differ between languages.

Can we actually figure out what the user was trying to achieve? I'll provide an example in English. Say someone has a […]

There may be other cases, but a single pair of overlapping sentences (…) is enough to illustrate the point.

My suggestion

I love the "Send this sentence to your language leader" functionality. Obviously, we need to check for duplicates to eliminate spam and make sure there is a language leader to handle the issue, but otherwise it's great! However, I'd not draw the line between user-solvable and non-user-solvable. I'd let the user try to debug a sentence and provide assessments based on all defined sentences (including custom sentences and sentence triggers, btw, since they can surely alter the behavior). Watch out for the fact that sentence triggers don't have an assigned language (yet, at least). The tool could list all possible matching sentences (again, there could be multiple sentences, as I listed in the table above) along with the causes that prevent matching and any steps the user might take to help with matching, then retry. Only if all steps are exhausted without any favorable result, or if there are no match candidates (sentences) at all, would I allow the users to report the issue upstream to language leaders. For the mismatched STT, Mike's already been working on some "similar sentence" mechanism that, when applied to phonemes instead of letters (see the discussion in the comments), could greatly aid in such cases.
-
In general I'm not against any of the proposed solutions, though maybe not a great fan of the "Assist failed because <***>" mechanism as a solution from a language leader's perspective. Sure, it's usable to self-debug things from the user's perspective, but such a failure still very much CAN or CANNOT be reproduced on my end; it's a lottery. If it's intended only for user needs, then yes, this is fantastic for the end user, and I strongly agree. But from a language leader's perspective it's way more valuable to have some sort of debug data sent to GitHub/an issue board/whatever (given the user consented to this), even just as a templated issue where we can discuss and/or triage it, and maybe get more details from this or other users as well.
-
(As a user) I would totally be fine with that. Some thoughts:
-
Morning @home-assistant/language-leaders !
(I'm JLo, I work at Nabu Casa as product manager for Home Assistant)
I am currently thinking about a solution with @matthiasdebaat and would like your feedback before going further.
Problem(s)
I'll present two problems; only one of them is focused on language leaders, but the solution I'll explain further down solves both.
Solution
The first problem is partly solved with our better error handling that will soon be available in the next release of home assistant.
This means that a user will have bits of context about how to solve a particular issue.
A dummy example:
If you say "Turn off the lights in the bathroom" and Assist replies "Sorry, I am not aware of any lights in the bathroom area", that's a clue on how to solve it: you need to assign lights to the bathroom area.
It's already a great start.
I am thinking about going one step further:
I'd like to provide a summary of all the sentences that failed on a user system (Somewhere ... We don't know where yet, we are brainstorming with @matthiasdebaat)
This list of unmatched sentences would be split into two parts:
Solvable by the user
Here the idea is pretty simple, it's to create resolutions flows tailored to a specific error that can be solved directly by the user.
A few examples to illustrate:

- Say you mention an area called `spaceship` that does not exist. Assist could answer: "…, but I do not know any area called `spaceship`", then offer to add `spaceship` as an alias for one of your existing areas.
- Say you ask to open the `curtains` in the `living room`. Assist could answer: "…, I am aware of the `living room` area and you do have `curtains`... but none of them are assigned to the `living room` area. Here are your `curtains`, pick the ones you want to assign to the `living room` area and I'll do it."

Not solvable by the user
This is the part where I need your feedback.
I see all the sentences said by our users that currently do not match any intent as very actionable data.
And I would like to use this data in a way.
Of course, while keeping the privacy DNA in mind, we cannot just "push" this data somewhere without user consent - That's not us.
What I am thinking about is to
Pretty simple.
We'll need to be pretty clear in the wording and UX behind this button, I don't see this button as a "Report bug" that requires a fix at some point in time.
I see it more as a data point that a user is willing to share with you: "I was expecting this sentence to work, and it does not" (end of the story).
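To make the "data point" idea concrete, here is a hedged sketch of what such an opt-in report and the top-10 aggregation for a language leader could look like. The schema is entirely hypothetical, not the actual HA implementation:

```python
# Hypothetical shape of an opt-in "failed sentence" report and the
# aggregation a language leader could consume.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class FailedSentenceReport:
    language: str   # e.g. "nb" for Norwegian Bokmål
    sentence: str   # the text that did not match any intent
    # Deliberately no user identifiers: the report is a bare data point,
    # shared only with explicit user consent.

def top_failing(reports: list[FailedSentenceReport],
                language: str, n: int = 10) -> list[tuple[str, int]]:
    """Top-n unmatched sentences for one language, by report count."""
    counts = Counter(r.sentence for r in reports if r.language == language)
    return counts.most_common(n)
```

Keeping the payload down to language plus sentence is one way to honor the privacy constraint while still giving leaders actionable, ranked data.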
Pros
Cons
What do you think?
Would you like to know the top 10 failing sentences in your language, to work on them? ☺️
Do you think the solution solves the problem?
Feel free to discuss here
Thanks a lot!
JLo