feat: Introduce Typesense-Powered Search, RAG Chat, and Platform Upgrades #594
Conversation
- …eeded dependencies [RECORD-SEARCH]
- …dd search button [RECORD-SEARCH]
- …earch function [RECORD-SEARCH]
- [RECORD-SEARCH]
Merged the Practitioner addressbook; looks like this branch has conflicts with main now.
# Conflicts:
#   frontend/src/app/app-routing.module.ts
#   frontend/src/app/app.module.ts
#   frontend/src/app/components/header/header.component.html
#   frontend/src/app/services/fasten-api.service.ts
# Conflicts:
#   docker-compose-prod.yml
#   docker-compose.yml
#   frontend/src/app/app-routing.module.ts
#   frontend/src/app/app.module.ts
#   frontend/src/app/components/header/header.component.html
#   frontend/yarn.lock
#   memory-bank/activeContext.md
#   memory-bank/productContext.md
#   memory-bank/progress.md
#   memory-bank/systemPatterns.md
#   memory-bank/techContext.md
All alerts resolved. This PR previously contained dependency changes with security issues that have been resolved, removed, or ignored.
# Conflicts:
#   frontend/angular.json
Looks like there are conflicts again.
# Conflicts:
#   backend/pkg/web/server.go
#   docker-compose-prod.yml
#   docker-compose.yml
#   frontend/src/app/app-routing.module.ts
#   frontend/src/app/app.module.ts
#   frontend/src/app/components/header/header.component.ts
#   frontend/src/assets/scss/dark/_components.scss
#   frontend/yarn.lock
@AnalogJ We have fixed the conflicts, thank you!
Summary
This PR introduces powerful new ways for users to interact with their health data through advanced search and conversational chat. It also includes significant architectural improvements and technology stack upgrades to enhance performance, security, and maintainability.
New Features: Advanced Search & Conversational Chat
The core of this update is the integration of Typesense, which powers two major new features designed to make health data more accessible and useful:
The frontend integrates with Typesense via the `typesense-js` library. Typesense, in turn, cooperates with a configured Large Language Model (LLM) to provide the complete RAG-based chat functionality. This approach requires an external or local LLM; we recommend Ollama for running local models like Llama 3.1 8B. To enable stateful follow-up questions, Typesense stores conversation history in a dedicated collection, allowing the RAG pipeline to maintain context. More information on the topic: https://typesense.org/docs/29.0/api/conversational-search-rag.html

Architectural Improvements
To make the platform more robust, flexible, and easier to manage, we've made the following key improvements:
- Dynamic frontend configuration via a new `/api/env` endpoint. Previously, the frontend's environment was static, bundled from a file at build time. This change was critical to allow features like Search and Chat to be enabled or disabled from a single place (`config.yaml`) without requiring a rebuild of the frontend application.
- Advanced features are opt-in via `config.yaml`, ensuring the core application remains accessible to all users while allowing power users to enable advanced functionalities.
- A new section in `config.yaml` has been introduced to handle all variables related to Typesense, search, and chat, making it easier to manage these features.

Platform & Dependency Upgrades
To keep the platform modern, secure, and performant, we have upgraded core components of our tech stack:
The integration uses the `typesense-go` client library for the backend and the `typesense-js` library for the frontend.

How to Enable Search and Chat
To get started with the new search and chat functionalities, follow these steps.
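For background, the chat feature is driven by Typesense conversational-search requests (see the docs link above). A request looks roughly like the sketch below; the collection name `fhir_resources`, the model id `conv-model-1`, the port, and the API key are illustrative assumptions, not values taken from this PR:

```shell
# Build the multi_search request body; collection and field names are hypothetical
BODY='{"searches":[{"collection":"fhir_resources","query_by":"embedding","exclude_fields":"embedding"}]}'

# Ask a natural-language question; conversation=true enables the RAG flow
# ("|| true" keeps this sketch from aborting when no server is running)
curl -s "http://localhost:8108/multi_search?q=when+was+my+last+flu+shot&conversation=true&conversation_model_id=conv-model-1" \
  -X POST \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY:-xyz}" \
  -d "$BODY" || true
```

Per the Typesense docs, the response includes a `conversation_id` that can be passed on subsequent requests so follow-up questions are threaded through the stored history.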
Part 1: Set Up a Local LLM with Ollama
The conversational chat feature requires a local Large Language Model (LLM) that exposes an OpenAI-compatible API. We recommend using Ollama for a straightforward setup.
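A quick way to confirm that a server exposes the OpenAI-compatible API is to list its models. Ollama serves this API on port 11434 by default; the URL below assumes a default local install:

```shell
# List available models via the OpenAI-compatible endpoint
# ("|| true" so the check does not abort when no server is running yet)
RESP=$(curl -s http://localhost:11434/v1/models || true)
echo "$RESP"
```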
Prerequisites:
- Ollama is recommended, but other OpenAI-compatible servers such as `vLLM` are also viable alternatives.

Steps:
Download and Install Ollama:
Pull a Language Model:
We recommend `llama3.1:8b` as a starting point.

Run the Model:
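Assuming the Ollama CLI is installed, the pull-and-run steps above map to two commands (model tag taken from the recommendation; both require `ollama` on your PATH):

```shell
# One-time download of the model weights
ollama pull llama3.1:8b

# Load the model and keep it available; Ollama's API listens on
# http://localhost:11434 by default
ollama run llama3.1:8b
```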
This serves the local API endpoint that `fasten-onprem` will use for the chat functionality.

Part 2: Configure and Run Fasten On-Prem
Prerequisites:
- `docker` and `docker compose` must be installed on your system.

Steps:
Download Configuration Files:
In a directory for `fasten-onprem`, download the following files: `docker-compose-prod.yml` and `config.yaml`.

Customize `config.yaml`:
Open `config.yaml` in a text editor and locate the `search` block. `vllm_url` should point to your local LLM's API endpoint. If you followed the Ollama setup above, the default URL should work correctly without any changes.

Start the Application:
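For reference, the relevant part of `config.yaml` might look like the sketch below. The key names `search` and `vllm_url` are mentioned in this PR; the surrounding structure, the `enabled` key, and the URL value are assumptions for illustration:

```yaml
search:
  enabled: true                                   # hypothetical toggle
  vllm_url: http://host.docker.internal:11434/v1  # assumed OpenAI-compatible endpoint
```

Then, from the directory containing the two downloaded files, a typical start would be:

```shell
docker compose -f docker-compose-prod.yml up -d
```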
Demo video below
chat-demo.mov