
⚕️ Multi-Agent-Medical-Assistant
AI-powered multi-agentic system for medical diagnosis and assistance



Important

📋 Version Updates from v2.0 to v2.1 and onwards:

  1. Document Processing Upgrade: Unstructured.io has been replaced with Docling for parsing documents and extracting the text, tables, and images to be embedded (a minimal Docling sketch follows below).
  2. Enhanced RAG References: RAG responses now end with links to the source documents and the reference images (saved to local storage) that appear in the reranked retrieved chunks.

To use the Unstructured.io-based solution, refer to release v2.0.
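
For context, here is a minimal sketch of Docling-based parsing, assuming only Docling's documented DocumentConverter quickstart; the repo's actual ingestion pipeline lives in ingest_rag_data.py and is more involved (table and image handling, summarization, embedding):

# Minimal Docling parsing sketch (illustrative; not the repo's actual ingestion code)
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("./data/raw/brain_tumors_ucni.pdf")  # sample path used in the ingestion examples below

# Export the parsed document (text, tables, image placeholders) as markdown, ready for chunking and embedding
markdown_text = result.document.export_to_markdown()
print(markdown_text[:500])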



📌 Overview

The Multi-Agent Medical Assistant is an AI-powered chatbot designed to assist with medical diagnosis, research, and patient interactions.

🚀 Powered by Multi-Agent Intelligence, this system integrates:

  • 🤖 Large Language Models (LLMs)
  • 🖼️ Computer Vision Models for medical imaging analysis
  • 📚 Retrieval-Augmented Generation (RAG) leveraging vector databases
  • 🌐 Real-time Web Search for up-to-date medical insights
  • 👨‍⚕️ Human-in-the-Loop Validation to verify AI-based medical image diagnoses

What You’ll Learn from This Project 📖

🔹 👨‍💻 Multi-Agent Orchestration with structured graph workflows
🔹 🔍 Advanced RAG Techniques – hybrid retrieval, semantic chunking, and vector search
🔹 ⚡ Confidence-Based Routing & Agent-to-Agent Handoff
🔹 🔒 Scalable, Production-Ready AI with Modularized Code & Robust Exception Handling

📂 For learners: Check out agents/README.md for a detailed breakdown of the agentic workflow! 🎯 A minimal orchestration sketch follows below.
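
To make the orchestration concrete, here is a minimal, hypothetical LangGraph sketch of confidence-based routing between a RAG agent and a web-search agent. The node names, state fields, and the 0.7 threshold are illustrative assumptions, not the repo's actual implementation in agents/:

# Hypothetical LangGraph routing sketch (illustrative; see agents/ for the real graph)
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    query: str
    answer: str
    confidence: float  # e.g., derived from token log-probabilities

def rag_agent(state: AgentState) -> AgentState:
    # ...retrieve from the vector DB, generate an answer, estimate confidence...
    return {**state, "answer": "rag answer", "confidence": 0.42}

def web_search_agent(state: AgentState) -> AgentState:
    # ...fall back to real-time web search when RAG confidence is low...
    return {**state, "answer": "web answer", "confidence": 0.9}

def route_on_confidence(state: AgentState) -> str:
    # Confidence-based agent-to-agent handoff
    return "web_search" if state["confidence"] < 0.7 else END

graph = StateGraph(AgentState)
graph.add_node("rag", rag_agent)
graph.add_node("web_search", web_search_agent)
graph.add_edge(START, "rag")
graph.add_conditional_edges("rag", route_on_confidence, {"web_search": "web_search", END: END})
graph.add_edge("web_search", END)
app = graph.compile()

print(app.invoke({"query": "What are the symptoms of a brain tumor?", "answer": "", "confidence": 0.0}))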


💫 Demo

Demo video: Multi-Agent-Medical-Assistant-v1-with-voiceover-compressed.mp4

If you like what you see and want to support the project's developer, you can Buy Me A Coffee! :)

📂 For an even more detailed demo video: Check out Multi-Agent-Medical-Assistant-v1.9. 📽️


🛡️ Technical Flow Chart



✨ Key Features

  • 🤖 Multi-Agent Architecture : Specialized agents working in harmony to handle diagnosis, information retrieval, reasoning, and more

  • 🔍 Advanced RAG Retrieval System :

    • Docling-based parsing to extract text, tables, and images from PDFs.
    • Embedding of markdown-formatted text, tables, and LLM-generated image summaries.
    • LLM-based semantic chunking with structural boundary awareness.
    • LLM-based query expansion with related medical domain terms.
    • Qdrant hybrid search combining BM25 sparse keyword search with dense embedding vector search (see the sketch after this feature list).
    • Input-output guardrails to ensure safe and relevant responses.
    • Confidence-based agent-to-agent handoff between RAG and Web Search to prevent hallucinations.
  • 🏥 Medical Imaging Analysis

    • Brain Tumor Detection (TBD)
    • Chest X-ray Disease Classification
    • Skin Lesion Segmentation
  • 🌐 Real-time Research Integration : Web search agent that retrieves the latest medical research papers and findings

  • 📊 Confidence-Based Verification : Log probability analysis ensures high accuracy in medical recommendations

  • 🎙️ Voice Interaction Capabilities : Seamless speech-to-text and text-to-speech powered by Eleven Labs API

  • 👩‍⚕️ Expert Oversight System : Human-in-the-loop verification by medical professionals before finalizing outputs

  • ⚔️ Input & Output Guardrails : Ensures safe, unbiased, and reliable medical responses while filtering out harmful or misleading content

  • 💻 Intuitive User Interface : Designed for healthcare professionals with minimal technical expertise
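
To illustrate the hybrid retrieval referenced in the RAG feature above, here is a minimal, hypothetical sketch using the qdrant-client query API. The collection name, named-vector names, and the dummy query vectors are placeholders, not the repo's actual configuration:

# Hypothetical Qdrant hybrid search sketch (illustrative; not the repo's actual retrieval code)
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # or use QDRANT_URL / QDRANT_API_KEY from .env

# Stand-ins for the real query encodings
dense_query = [0.12, 0.34, 0.56]                                                    # dense embedding of the user query
sparse_query = models.SparseVector(indices=[17, 42, 108], values=[0.9, 0.4, 0.2])   # BM25-style sparse encoding

results = client.query_points(
    collection_name="medical_docs",   # assumed collection with named vectors "dense" and "sparse"
    prefetch=[
        models.Prefetch(query=dense_query, using="dense", limit=20),    # semantic candidates
        models.Prefetch(query=sparse_query, using="sparse", limit=20),  # keyword candidates
    ],
    query=models.FusionQuery(fusion=models.Fusion.RRF),                 # fuse both candidate lists
    limit=5,
)

for point in results.points:
    print(point.id, point.score)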

Note

Upcoming features:

  1. Brain Tumor Medical Computer Vision model integration.
  2. Open to suggestions and contributions.

🛠️ Technology Stack

Component / Technologies
🔹 Backend Framework: FastAPI
🔹 Agent Orchestration: LangGraph
🔹 Document Parsing: Docling
🔹 Knowledge Storage: Qdrant Vector Database
🔹 Medical Imaging: Computer Vision Models
  • Brain Tumor: Object Detection (PyTorch)
  • Chest X-ray: Image Classification (PyTorch)
  • Skin Lesion: Semantic Segmentation (PyTorch)
🔹 Guardrails: LangChain
🔹 Speech Processing: Eleven Labs API
🔹 Frontend: HTML, CSS, JavaScript
🔹 Deployment: Docker, GitHub Actions CI/CD

🚀 Installation & Setup

📌 Option 1: Using Docker

Prerequisites:

  • Docker installed on your system
  • API keys for the required services

1️⃣ Clone the Repository

git clone https://github.com/souvikmajumder26/Multi-Agent-Medical-Assistant.git
cd Multi-Agent-Medical-Assistant

2️⃣ Create Environment File

  • Create a .env file in the root directory and add the following API keys:

Note

You may use any LLM and embedding model of your choice:

  1. If using Azure OpenAI, no modification is required.
  2. If using OpenAI directly, modify the LLM and embedding model definitions in 'config.py' and provide the appropriate env variables.
  3. If using local models, code changes may be required throughout the codebase, especially in 'agents'.

Warning

Ensure the API keys in the .env file are correct and have the necessary permissions.

# LLM Configuration (Azure OpenAI - gpt-4o used in development)
# If using any other LLM API key or local LLM, appropriate code modification is required
deployment_name = 
model_name = gpt-4o
azure_endpoint = 
openai_api_key = 
openai_api_version = 

# Embedding Model Configuration (Azure OpenAI - text-embedding-ada-002 used in development)
# If using any other embedding model, appropriate code modification is required
embedding_deployment_name =
embedding_model_name = text-embedding-ada-002
embedding_azure_endpoint = 
embedding_openai_api_key = 
embedding_openai_api_version = 

# Speech API Key (Free credits available with new Eleven Labs Account)
ELEVEN_LABS_API_KEY = 

# Web Search API Key (Free credits available with new Tavily Account)
TAVILY_API_KEY = 

# Hugging Face Token - using reranker model "ms-marco-TinyBERT-L-6"
HUGGINGFACE_TOKEN = 

# (OPTIONAL) If using Qdrant server version, local does not require API key
QDRANT_URL = 
QDRANT_API_KEY = 
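
As a hedged illustration of how these variables might be consumed (the repo's actual wiring lives in config.py; the use of langchain-openai here is an assumption based on the project's LangChain stack):

# Illustrative only - see config.py for the real configuration
import os
from dotenv import load_dotenv
from langchain_openai import AzureChatOpenAI

load_dotenv()  # reads the .env file shown above

llm = AzureChatOpenAI(
    azure_deployment=os.getenv("deployment_name"),
    azure_endpoint=os.getenv("azure_endpoint"),
    api_key=os.getenv("openai_api_key"),
    api_version=os.getenv("openai_api_version"),
)

print(llm.invoke("Hello!").content)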

3️⃣ Build the Docker Image

docker build -t medical-assistant .

4️⃣ Run the Docker Container

docker run -d \
  --name medical-assistant-app \
  -p 8000:8000 \
  --env-file .env \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/uploads:/app/uploads \
  medical-assistant

The application will be available at: http://localhost:8000

5️⃣ Ingest Data into Vector DB from Docker Container

  • To ingest a single document:
docker exec medical-assistant-app python ingest_rag_data.py --file ./data/raw/brain_tumors_ucni.pdf
  • To ingest multiple documents from a directory:
docker exec medical-assistant-app python ingest_rag_data.py --dir ./data/raw

Managing the Container:

Stop the Container

docker stop medical-assistant-app

Start the Container

docker start medical-assistant-app

View Logs

docker logs medical-assistant-app

Remove the Container

docker rm medical-assistant-app

Troubleshooting:

Container Health Check

The container includes a health check that monitors the application status. You can check the health status with:

docker inspect --format='{{.State.Health.Status}}' medical-assistant-app

Container Not Starting

If the container fails to start, check the logs for errors:

docker logs medical-assistant-app

📌 Option 2: Manual Installation

1️⃣ Clone the Repository

git clone https://github.com/souvikmajumder26/Multi-Agent-Medical-Assistant.git  
cd Multi-Agent-Medical-Assistant  

2️⃣ Create & Activate Virtual Environment

  • If using conda:
conda create --name <environment-name> python=3.11
conda activate <environment-name>
  • If using python venv:
python -m venv <environment-name>
source <environment-name>/bin/activate  # For Mac/Linux
<environment-name>\Scripts\activate     # For Windows  

3️⃣ Install Dependencies

Important

ffmpeg is required for the speech service to work.

  • If using conda:
conda install -c conda-forge ffmpeg
pip install -r requirements.txt  
  • If using python venv:
winget install ffmpeg   # Windows (on Mac/Linux, install ffmpeg via your system package manager)
pip install -r requirements.txt  
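
Whichever environment you use, you can confirm that ffmpeg is reachable on your PATH before starting the app:

ffmpeg -version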

4️⃣ Set Up API Keys

  • Create a .env file and add the required API keys as shown in Option 1.

5️⃣ Run the Application

  • Run the following command in the activated environment.
python app.py

The application will be available at: http://localhost:8000

6️⃣ Ingest additional data into the Vector DB

Run any one of the following commands as required.

  • To ingest one document at a time:
python ingest_rag_data.py --file ./data/raw/brain_tumors_ucni.pdf
  • To ingest multiple documents from a directory:
python ingest_rag_data.py --dir ./data/raw

🧠 Usage

Note

  1. The first run can be slow and may throw errors - be patient and check the console for ongoing downloads and installations.
  2. On the first run, many models will be downloaded - YOLO for Tesseract OCR, the computer vision agent models, the cross-encoder reranker model, etc.
  3. Once the downloads are complete, retry; everything should then work seamlessly, as all of it has been thoroughly tested.
  • Upload medical images for AI-based diagnosis by the task-specific computer vision agents - try the images in the 'sample_images' folder.
  • Ask medical queries; the assistant answers via retrieval-augmented generation (RAG) when the information is in its knowledge base, or via web search to retrieve the latest information.
  • Use voice-based interaction (speech-to-text and text-to-speech).
  • Review AI-generated insights with human-in-the-loop verification.

🤝 Contributions

Contributions are welcome! Please check the issues tab for feature requests and improvements.


⚖️ License

This project is licensed under the Apache-2.0 License. See the LICENSE file for details.


📝 Citing

@misc{Souvik2025,
  Author = {Souvik Majumder},
  Title = {Multi Agent Medical Assistant},
  Year = {2025},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/souvikmajumder26/Multi-Agent-Medical-Assistant}}
}

📬 Contact

For any questions or collaboration inquiries, reach out to Souvik Majumder on:

🔗 LinkedIn: https://www.linkedin.com/in/souvikmajumder26

🔗 GitHub: https://github.com/souvikmajumder26

🔝 Return