Pacha - A TUI Frontend for llama.cpp #2071
mounta11n started this conversation in Show and tell
I'd like to share my little app 'Pacha'. The goal is to have something that stays lightweight and terminal-based like llama.cpp itself, while still providing a minimum of comfort. At some point I just found it annoying to retype or copy a whole command for every small change in a parameter value I wanted to test.
There are ready-to-use binaries for Windows, Linux, and macOS (Intel). Just put the binary into the same folder as llama.cpp and you're good to go!
https://github.com/mounta11n/Pacha
This frontend is not meant to be a chat UI or to replace anything; it's a tool to quickly test a model, a prompt style, and/or certain parameters. I think it could be a good first stop for trying out new models.
The top bar changes its color based on the current CPU usage.
Here is an asciinema demonstration:
The app is still buggy in some places, but I'm working on it. I felt it is now functional enough to release.
There are more features planned, like integrating bert.ggml for semantic context and a workspace for training baby llamas from scratch.
To smart people who are familiar with JavaScript: please look over my code and tell me how I can improve the buggy parts. For example, I just can't figure out why there is a line break after the first chunk in the output. I've tried for ages to understand and fix it, but... I don't know. And I don't dare ask GPT-4 anymore: I'd spend more than half my time debugging GPT's f+cking mistakes, and besides, I'm pretty sure my wife will kill me when the next OpenAI bill arrives.
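On the stray line break: without seeing the code I can only guess, but one common cause when streaming model output is that the very first chunk starts with a newline or other leading whitespace from the model itself. A minimal sketch of a fix, trimming only the first chunk (function names here are hypothetical):

```javascript
// Hypothetical fix sketch: strip leading whitespace from the FIRST
// streamed chunk only, so later chunks (including real newlines) pass
// through untouched.
function makeChunkPrinter(write) {
  let first = true;
  return function printChunk(chunk) {
    if (first) {
      chunk = chunk.replace(/^\s+/, ""); // trim once, at stream start
      first = false;
    }
    write(chunk);
  };
}

// Usage: feed streamed chunks through the printer.
let out = "";
const print = makeChunkPrinter((s) => { out += s; });
["\nHello", " world", "\nlater newline kept"].forEach(print);
console.log(JSON.stringify(out)); // "Hello world\nlater newline kept"
```

If the break appears even when the first chunk contains no whitespace, the cause is more likely in how the chunks are written to the terminal (e.g. an accidental `console.log` per chunk instead of `process.stdout.write`).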