I came across this highly creative project, which brings vector search to PostgreSQL, one of the most popular open-source databases.
The Intel Xeon platform supports AMX instructions, which can significantly accelerate vector inner-product computation. May I know if there is a plan to speed up pgvecto.rs with these advanced CPU instructions? @gaocegege
I investigated AMX a while ago. My conclusion was that AMX won't speed up a single dot product; it can provide some speedup for matrix-vector multiplication, though not a large one. Is that right?
We're interested in AMX but don't know much about it. It would be great if we could collaborate here!
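For context on why the two cases differ: a single dot product performs only one multiply-add per element loaded from memory, so it is bandwidth-bound and extra compute units don't help; in a matrix-vector product, each element of the query is reused across every row, which is the kind of data reuse a tile unit like AMX can exploit. A minimal plain-Rust sketch of the two shapes (no AMX intrinsics; the function names are illustrative, not from the pgvecto.rs codebase):

```rust
/// Dot product of two f32 slices: one multiply-add per element loaded,
/// so memory bandwidth, not arithmetic, is the bottleneck.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Matrix-vector product: each element of `x` is (conceptually) reused
/// across all rows -- the reuse a tiled matrix unit can amortize.
fn matvec(rows: &[Vec<f32>], x: &[f32]) -> Vec<f32> {
    rows.iter().map(|r| dot(r, x)).collect()
}
```

This is only a shape argument: real AMX kernels would batch rows into tiles and use BF16/INT8 tile multiplies rather than scalar f32 loops.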
The gain from AMX instructions depends on the scenario and how they are used.
May I ask whether the vectors are stored in contiguous memory in a column-oriented layout? If so, per our estimation AMX could speed up a single query by roughly 4x in the flat-index algorithm.
The storage layout is flexible on our side, and we're also working on new IVF algorithms, so we're definitely interested in accelerating vector-matrix distance computation. Is there any real-world AMX code we can learn from?
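To make the layout question concrete: if a flat index stores all `n` vectors row-major in one contiguous buffer, then scoring a query against the whole index is exactly one `(n x dim) * (dim)` matrix-vector product, which is the shape a tiled kernel could accelerate. A hypothetical plain-Rust sketch of that layout (again no AMX intrinsics; `FlatIndex` and its fields are illustrative, not the project's actual types):

```rust
/// Flat index: `n` vectors of dimension `dim` stored contiguously,
/// row-major, in a single buffer -- the layout asked about above.
struct FlatIndex {
    dim: usize,
    data: Vec<f32>, // length = n * dim
}

impl FlatIndex {
    /// Inner-product scores of `query` against every stored vector.
    /// Because rows are contiguous, the whole scan is one
    /// (n x dim) * (dim) matrix-vector product and could be handed
    /// off to a tiled (e.g. AMX-backed) kernel instead of this loop.
    fn scores(&self, query: &[f32]) -> Vec<f32> {
        assert_eq!(query.len(), self.dim);
        self.data
            .chunks_exact(self.dim)
            .map(|row| row.iter().zip(query).map(|(a, b)| a * b).sum())
            .collect()
    }
}
```

Real AMX kernels additionally require BF16 or INT8 tiles and OS-level enabling of the AMX state, so this scalar loop is only a stand-in for the memory layout being discussed.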