Search for and identify contacts by physical traits extracted from pictures with AI.
- Docker/Docker Compose
- `make` command
Prepare the environment:

```bash
make prepare-env
```
For GPU support:

- Install the NVIDIA Container Toolkit (example install commands below).

```bash
make build-gpu
```
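If the toolkit is not installed yet, the installation on Debian/Ubuntu typically looks like the sketch below. It assumes NVIDIA's apt repository is already configured on the host (see NVIDIA's official guide for your distribution); these commands mirror that guide and are not specific to this project.

```bash
# Install the toolkit, register it with Docker, and restart the daemon.
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```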
For CPU-only:

```bash
make build
```
Start the services:

```bash
make up
```
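To confirm the stack came up, standard Docker Compose commands work; the sketch below assumes the project's services are managed through a docker-compose file in the repository root.

```bash
# List the running containers and follow their logs.
docker compose ps
docker compose logs -f
```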
This project uses a vision-capable (multimodal) model. You can explore all available options here to find one that best fits your use case.
By default, the project uses gemma3:12b-it-qat. To change it, just update OLLAMA_MODEL in your .env or shell.
Make sure to pick a model size that your GPU/CPU and memory can comfortably support.
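For example, to point the project at a smaller vision-capable model (the tag below is illustrative; use any tag available in your Ollama install):

```bash
# Option 1: set it in .env
OLLAMA_MODEL=gemma3:4b-it-qat

# Option 2: export it in your shell before running make
export OLLAMA_MODEL=gemma3:4b-it-qat
```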
You can try the project online at: https://eaglai.griffin-frog.ts.net/.
- Database refresh: The database is automatically refreshed every hour. All data will be reset at that time.
- Mock Ollama responses: This demo uses the mock configuration, so the Ollama API responses are static and not representative of any actual parsed facial features.
⚠️ Warning: Do not enter any personal or sensitive information. All data is temporary and publicly accessible.
This project is licensed under the MIT License.
