The DQA (difficult questions attempted) project uses large language models (LLMs) to perform multi-hop question answering (MHQA). The project was inspired by the tutorial [1] and the article [2], both by Dean Sacoransky.
The following table lists some updates regarding the project status. Note that these do not correspond to specific commits or milestones.
| Date | Status | Notes or observations |
|---|---|---|
| June 7, 2025 | active | Changed the package and project manager from Poetry to `uv`. Changed LLM orchestration from DSPy to LangChain/LangGraph. Added a supporting MCP server. |
| February 15, 2025 | active | Added a custom adapter for DeepSeek models. |
| January 26, 2025 | active | Replaced LlamaIndex Workflows with DSPy. |
| September 21, 2024 | active | Made workflows selectable. |
| September 13, 2024 | active | Observed that low-parameter LLMs perform poorly at unnecessary self-discovery, query refinement and ReAct tool selection. |
| September 10, 2024 | active | Noted that query decomposition may generate unnecessary sub-workflows. |
| August 31, 2024 | active | Using the built-in ReAct agent. |
| August 29, 2024 | active | Project started. |
The directory where you clone this repository will be referred to hereinafter as the working directory (WD).
Install uv. To install the project with its essential dependencies in a virtual environment, run the following in the WD.

```bash
uv sync
```
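To install all non-essential dependencies as well, add the `--all-extras` flag:

```bash
uv sync --all-extras
```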
In addition to the project dependencies, see the installation instructions for Ollama, which you can also install on a separate machine. Download a tool-calling model for Ollama that you want to use, e.g., `llama3.1` or `mistral-nemo`.
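For example, assuming Ollama is installed and running locally, the default model can be downloaded with:

```bash
ollama pull mistral-nemo
```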
The following is a list of environment variables that can be used to configure the DQA application. All environment variables should be supplied as quoted strings; they will be interpreted as the correct type where necessary. For environment variables starting with `GRADIO_`, see the Gradio documentation for environment variables.
When running the provided MCP server, the following environment variables can be specified, prefixed with `FASTMCP_SERVER_`: `HOST`, `PORT`, `DEBUG` and `LOG_LEVEL`. See the key configuration options for FastMCP. Note that options prefixed with `on_duplicate_` will be ignored if specified as environment variables.
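For example, to run the provided MCP server on a different port with more verbose logging, you could set the following (the values shown are illustrative):

```bash
export FASTMCP_SERVER_PORT="8080"
export FASTMCP_SERVER_LOG_LEVEL="DEBUG"
```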
| Variable | [Default value] and description |
|---|---|
| `DQA_MCP_CLIENT_CONFIG` | [`config/mcp-client.json`] The path to the config file for providing a set of MCP servers. |
| `DQA_USE_MCP` | [`True`] If this is set to `True`, make sure that the MCP client configuration points to available MCP server(s). You may want to run the provided MCP server; see below for instructions. |
| `DQA_LLM_CONFIG` | [`config/chat-ollama.json`] The path to the config file for the Ollama LLM provider. |
| `LLM__OLLAMA_MODEL` | [`mistral-nemo`] See the available models. The model must be available on the selected Ollama server and must support tool calling. |
| `DQA_MCP_SERVER_TRANSPORT` | [`sse`] The acceptable options are `sse` or `streamable-http`. |
The structure of the MCP configuration file is as follows. Note that while the configuration supports the `stdio` transport, the DQA agent is configured to support only `streamable_http` or `sse`, which are also the allowed modes for the DQA MCP server. For examples of the configuration, see the LangChain MCP adapters.
```json
{
  "<name>": {
    "transport": "<Either streamable_http or sse>",
    "url": "<url-of-the-remote-server>"
  }
}
```
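For illustration, a minimal client configuration pointing the agent at a single MCP server running locally over SSE (the server name and URL below are hypothetical) could look like this:

```json
{
  "dqa-tools": {
    "transport": "sse",
    "url": "http://localhost:8000/sse"
  }
}
```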
The structure of the LLM config JSON file is as follows. The full list of acceptable configuration parameters for the Ollama LLM is available in the LangChain documentation.
```json
{
  "baseUrl": "<model-provider-url>",
  "model": "<model-name>"
}
```
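For example, a config for a local Ollama instance listening on its default port (11434), using the default model, would be:

```json
{
  "baseUrl": "http://localhost:11434",
  "model": "mistral-nemo"
}
```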
Create a `.env` file in the WD to set the environment variables described above, if you want to use anything other than the default settings.
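For example, a `.env` file that overrides a few of the defaults (the values shown are illustrative) might contain:

```bash
DQA_MCP_SERVER_TRANSPORT="streamable-http"
LLM__OLLAMA_MODEL="llama3.1"
GRADIO_SERVER_PORT="7860"
```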
Run the following in the WD to start the MCP server.
```bash
uv run dqa-mcp
```
Run the following in the WD to start the web server.
```bash
uv run dqa-webapp
```
The web UI will be available at http://localhost:7860. To exit the server, use the Ctrl+C key combination.
Install `pre-commit` for Git, along with `ruff`. Then enable `pre-commit` by running the following in the WD.
```bash
pre-commit install
```
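If `pre-commit` and `ruff` are not already available, one way to install them before enabling the hook above (assuming you want them as uv-managed tools; other installation methods work equally well) is:

```bash
uv tool install pre-commit
uv tool install ruff
```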
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
MIT.