Official implementation of QAEncoder, our training-free query-document alignment method for advanced QA systems.
Modern QA systems rely on retrieval-augmented generation (RAG) for accurate and trustworthy responses. However, the inherent gap between user queries and relevant documents hinders precise matching. Motivated by our conical distribution hypothesis, which posits that potential queries and documents form a cone-like structure in the embedding space, we introduce QAEncoder, a training-free approach to bridging this gap. Specifically, QAEncoder estimates the expectation of potential queries in the embedding space as a robust surrogate for the document embedding, and attaches document fingerprints to keep these embeddings distinguishable. Extensive experiments on fourteen embedding models across six languages and eight datasets validate QAEncoder's alignment capability, offering a plug-and-play solution that integrates seamlessly with existing RAG architectures and training-based methods.
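The indexing idea above can be sketched in a few lines: replace each document's raw embedding with the mean of its pseudo-query embeddings (an estimate of the query-cluster center under the conical distribution hypothesis), then mix back a document "fingerprint" so distinct documents remain separable. This is a minimal illustration, not the paper's exact formulation: the toy `embed` function, the `qa_encode` name, and the `alpha` mixing weight are all assumptions; in practice a real sentence encoder and an LLM-based query generator would be used.

```python
import numpy as np

def embed(text, dim=64):
    # Toy deterministic stand-in for a real sentence encoder
    # (hypothetical; any off-the-shelf embedding model fits here).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def qa_encode(doc, pseudo_queries, alpha=0.5):
    """Sketch of QAEncoder-style indexing: average the embeddings of
    LLM-generated pseudo-queries as a surrogate for the document
    embedding, then blend in a fingerprint of the document itself.
    `alpha` (hypothetical) trades off alignment vs. distinguishability."""
    q_mean = np.mean([embed(q) for q in pseudo_queries], axis=0)
    fingerprint = embed(doc)
    v = (1 - alpha) * q_mean + alpha * fingerprint
    return v / np.linalg.norm(v)

doc = "The Eiffel Tower was completed in 1889 in Paris."
pseudo_queries = [  # in practice, generated by an LLM from `doc`
    "When was the Eiffel Tower completed?",
    "Where is the Eiffel Tower located?",
    "Who built the Eiffel Tower?",
]
doc_vec = qa_encode(doc, pseudo_queries)

# At retrieval time, a user query is embedded and scored as usual;
# no change to the RAG pipeline is needed (plug-and-play).
score = float(embed("In what year did the Eiffel Tower open?") @ doc_vec)
```

Because only the document-side embedding changes, this drops into any existing vector index without retraining the encoder.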
Set up the environment and run the demo script:

```shell
git clone https://github.com/IAAR-Shanghai/QAEncoder.git
cd QAEncoder
conda create -n QAE python=3.8
conda activate QAE
pip install -r requirements-demo.txt
python demo.py
```
The output should look like:
This work is currently under review and the code is being refactored. We plan to fully open-source our project in stages:
- Release Demo
- Release QAEncoder codes and datasets
- Release QAEncoder codes compatible with Llamaindex and Langchain
- Release QAEncoder++, our future work
```bibtex
@article{wang2024qaencoder,
  title={QAEncoder: Towards Aligned Representation Learning in Question Answering System},
  author={Wang, Zhengren and Yu, Qinhan and Wei, Shida and Li, Zhiyu and Xiong, Feiyu and Wang, Xiaoxing and Niu, Simin and Liang, Hao and Zhang, Wentao},
  journal={arXiv preprint arXiv:2409.20434},
  year={2024}
}
```