Blue Moon Release #138
pwgit-create
announced in
Announcements
Release Notes 1.7.2
Three additional AI/Ollama configurations have been added, and they can be edited within the same file. Those are:
Install AppWish Ollama
Start Appwish Ollama
Do you have less than 8 GB of RAM?
If you plan on running the Linux AMD x64 version with less than 8 GB of RAM, you may experience slow response times from the AI models. To achieve faster speeds, consider running a lighter model (llama3 8b). The Raspberry Pi version already defaults to that model.
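To see which side of the 8 GB threshold a machine falls on, total RAM can be read from `/proc/meminfo` on Linux. This is a minimal sketch, not part of the release; the 8 GB cutoff simply mirrors the note above.

```shell
# Read MemTotal (in kB) from /proc/meminfo and convert to whole GB.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
if [ "$mem_gb" -lt 8 ]; then
    echo "About ${mem_gb} GB of RAM: consider the lighter llama3 8b model."
else
    echo "About ${mem_gb} GB of RAM: the default model should be workable."
fi
```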
How can I change the model to Llama 3 for the Linux AMD x64 version?
Run the install script that installed Ollama before typing `ollama pull llama3:latest` in your terminal. Then edit the text in the file at the path `src/main/resources/ollama_model.props`, changing `MODEL_NAME=codestral:22b` into `MODEL_NAME=llama3:latest`.
Helper script for Windows Subsystem for Linux (WSL)
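The manual edit above can also be scripted. This is a minimal sketch, assuming you run it from the repository root and that the file contains a single `MODEL_NAME` line:

```shell
# Switch the configured model after pulling it with: ollama pull llama3:latest
PROPS=src/main/resources/ollama_model.props
# Rewrite the MODEL_NAME line in place (assumes exactly one such line).
sed -i 's/^MODEL_NAME=.*/MODEL_NAME=llama3:latest/' "$PROPS"
```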
This script is designed to help you run Appwish Ollama using WSL (wsl_helper_script.sh).
If you have a decent Nvidia GPU, you can run Nvidia CUDA with WSL without much setup. If you're looking for very fast app generation, this option is a good choice.
Running this script is not recommended if you have no intention of using WSL.
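Before running `wsl_helper_script.sh`, you can sanity-check that you are actually inside WSL and that an Nvidia GPU is visible. Both checks below are common heuristics, not guarantees, and are not part of the helper script itself:

```shell
# WSL kernels report "microsoft" in /proc/version.
if grep -qi microsoft /proc/version 2>/dev/null; then
    echo "Running under WSL."
else
    echo "Not WSL: running the helper script is not recommended here."
fi
# nvidia-smi being on PATH suggests the Nvidia driver (and CUDA) is usable.
if command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi found: CUDA acceleration may be available."
fi
```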
Have fun with the release and generate apps in a responsible manner. 🐲 🔮 🌌
This discussion was created from the release Blue Moon Release.