Replies: 1 comment 2 replies
What's your embedding model?
Hey everyone 👋
I'm running a `hybrid` search (the same thing happens with any other search mode) with `top_k=40` using the nano-vectordb backend, testing queries on a small/medium-sized vector DB (~400 entities). My query is a single word, specifically an `entity_name` like `"ID_B_123"`. This query/prompt is also a DB node, and that node has a description that includes the `entity_name` again. However, none of the search methods (`global`, `hybrid`, or `mix`) return this relevant node as part of the results, especially as the number of DB nodes grows. Has anyone else faced this issue?

I guess that's because the embedding of the prompt (just the entity name) isn't semantically rich enough to match well with the node description, even though the user is clearly requesting a very specific entity. In practice, this node should be ranked very high, possibly even with an importance score of `1.0`.

I'm curious how others are solving this type of edge case.
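For reference, a minimal repro of what I'm doing (a sketch: the `rag` instance is a placeholder for an already-initialized LightRAG setup with the entities ingested):

```python
from lightrag import LightRAG, QueryParam

# `rag` stands in for an already-initialized LightRAG instance
# backed by nano-vectordb, with ~400 entities ingested.
rag: LightRAG = ...

# Query with nothing but the entity name; "global" and "mix" behave the same.
result = rag.query("ID_B_123", param=QueryParam(mode="hybrid", top_k=40))

# The node named ID_B_123 is missing from the retrieved context.
print(result)
```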
My ideas so far:

- Before `lightrag.query`, check if the user input matches a node's `entity_name`. If so, use `lightrag.get_entity_info(..)`, extract the description, and inject it into the prompt, so the scoring reflects that direct match, pushing the similarity from something like `0.1` to `0.95` (a wild guess). A rough sketch of this pre-check is below.

Would love to hear your thoughts or solutions if you've dealt with something similar.
Best,
req.