Computer specifications
Functionalities
Install on Linux
Install on Windows
Minimum:
Operating System: Linux, macOS, or Windows
Memory (RAM): 8 GB
Processor: A reasonably modern CPU (no more than ~5 years old, 4 cores)
GPU: An integrated GPU works, but runs slowly
Recommended:
Operating System: Linux, macOS, or Windows
Memory (RAM): 16 GB
Processor: A reasonably modern CPU (no more than ~5 years old, 8 cores)
GPU: A dedicated GPU with at least 6 GB of VRAM (CUDA support preferred)
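If you want a quick way to compare your machine against these numbers on Linux, something like the following works (it assumes coreutils and procps are installed; nvidia-smi is only present when NVIDIA drivers are installed):

```shell
# Rough hardware check on Linux.
nproc                                   # CPU core count: compare with 4 (minimum) / 8 (recommended)
free -h | grep -i '^mem' || echo "free not available"   # total RAM: compare with 8 GB / 16 GB
# GPU check is only meaningful with an NVIDIA card and driver installed:
command -v nvidia-smi >/dev/null 2>&1 \
  && nvidia-smi --query-gpu=name,memory.total --format=csv \
  || echo "nvidia-smi not found (no dedicated NVIDIA GPU detected)"
```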
- Respond to a prompt with continuous context (responses may be inappropriate depending on the model)
- Change model inside the webapp
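"Continuous context" here means earlier turns of the conversation are sent back to the model with each new prompt. A minimal sketch of this against Ollama's chat API, assuming the server is running on its default port 11434 ("llama3" is a placeholder for whatever model you pulled):

```shell
# The messages array carries the whole conversation so far, which is
# how the model can answer questions about earlier turns.
RESPONSE=$(curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {"role": "user", "content": "My name is Sam."},
    {"role": "assistant", "content": "Nice to meet you, Sam!"},
    {"role": "user", "content": "What is my name?"}
  ],
  "stream": false
}' || true)
echo "${RESPONSE:-No Ollama server reachable on localhost:11434}"
```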
curl -fsSL https://ollama.com/install.sh | sh
ollama serve
You can find models at ollama.com/search.
Once you have found a model, run it and test a few prompts to make sure it works.
ollama run <model name>
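Besides prompting interactively, you can confirm the server side is up before wiring in the webapp. A small sketch, assuming Ollama's default endpoint http://localhost:11434:

```shell
# Probe the Ollama server; -w prints the HTTP status code ("000" means
# nothing answered on that port).
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:11434/ || true)
if [ "$STATUS" = "200" ]; then
  echo "Ollama server is running"
else
  echo "Ollama server not reachable (HTTP status: $STATUS)"
fi
```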
git clone https://github.com/asseukihuh/ai-webapp
python -m http.server 8000
Go to localhost:8000 in your browser.
In the page that appears, navigate to the directory containing index.html.
You now have your local ai-webapp.
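The serve-and-open steps above can be condensed into a quick smoke test, assuming the repo was cloned into ./ai-webapp and Python 3 is on your PATH (--directory points http.server at the folder to serve, so you can run it from anywhere):

```shell
# Serve the cloned webapp in the background and confirm it answers.
python3 -m http.server 8000 --directory ai-webapp &
SERVER_PID=$!
sleep 1
# Expect HTTP 200 when the directory exists and the server is up.
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/ || true)
echo "Server answered with HTTP $HTTP_CODE"
kill "$SERVER_PID" 2>/dev/null || true
```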
Download and install Ollama from ollama.com.
ollama serve
You can find models at ollama.com/search.
Once you have found a model, run it and test a few prompts to make sure it works.
ollama run <model name>
git clone https://github.com/asseukihuh/ai-webapp
python -m http.server 8000
Go to localhost:8000 in your browser (you can choose which directory is served via the --directory "" option).
In the page that appears, navigate to the directory containing index.html.
You now have your local ai-webapp.