
how to make it faster #6

Open
scrawnyether5669 opened this issue Apr 16, 2023 · 6 comments
@scrawnyether5669

I installed the latest version and it's a cool app, but it's so slow. I'm running Vicuna 7B. Is there a way to make it faster? I have a phone with 8 GB of RAM. Also, what other models does it support? Please link me to them.

@dsd

dsd commented Jul 1, 2023

I have a branch that moves more of the processing into native code; I believe it should bring a noticeable performance improvement. You can also try 3B models with this version, which should be much faster still. Feel free to try it.
Note that the new llama.cpp changes model compatibility: models that used to work with Sherpa probably won't work any more until they are converted.
Pull request: #12
APK available: https://github.com/dsd/sherpa/releases/tag/2.2.1-dsd2
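For anyone hitting the conversion step, the usual llama.cpp flow at the time looked roughly like the sketch below. This is not from the Sherpa docs; script names, flags, and quantization types vary between llama.cpp revisions, and the model path is a placeholder.

```shell
# Sketch only: convert an original PyTorch/HF model into the ggml format
# expected by the updated llama.cpp, then quantize it for mobile use.
# Script names and flags differ between llama.cpp revisions.
cd llama.cpp
python3 convert.py /path/to/vicuna-7b            # writes ggml-model-f16.bin
./quantize /path/to/vicuna-7b/ggml-model-f16.bin \
           /path/to/vicuna-7b/ggml-model-q4_0.bin q4_0
```

A 4-bit quantization such as q4_0 is typically what makes a 7B model small enough to load on a phone at all.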

@realcarlos

> I have a branch that moves more of the processing into native code, I believe it should bring a noticeable performance improvement. You can also try 3B models with this version, which should also be much faster. Feel free to try. Note that the new llama.cpp changes model compatibility, models that used to work with Sherpa probably don't work any more until conversion. Pull request: #12 APK available: https://github.com/dsd/sherpa/releases/tag/2.2.1-dsd2

Hi dsd, it works with the APK you provided, but I failed to run it from your forked source.
Also, when I run it on my Mac, it shows "Library not loaded: @rpath/libllama.dylib".

@dsd

dsd commented Jul 12, 2023

This is my first time developing Android apps, but feel free to share details about the failure to run from source and I will let you know if I have any ideas.

I did not do any work to retain Mac compatibility, but I think this is what needs to be done: #12 (comment)

@suoko

suoko commented Oct 9, 2023

> I have a branch that moves more of the processing into native code, I believe it should bring a noticeable performance improvement. You can also try 3B models with this version, which should also be much faster. Feel free to try. Note that the new llama.cpp changes model compatibility, models that used to work with Sherpa probably don't work any more until conversion. Pull request: #12 APK available: https://github.com/dsd/sherpa/releases/tag/2.2.1-dsd2

Is this app using both the CPU and the GPU of smartphones?
Also, is there any chance of making it run with less RAM, such as 4 GB?

@dsd

dsd commented Oct 11, 2023

llama.cpp is used as the backend, so you would need to check whether llama.cpp supports your GPU, and whether it is usable with 4 GB of RAM for the model you are interested in.
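As a rough rule of thumb (my own back-of-the-envelope estimate, not something from the Sherpa or llama.cpp docs), weight memory is roughly the parameter count times the bits per weight of the chosen quantization; real usage adds context/KV-cache overhead on top:

```python
# Back-of-the-envelope RAM estimate for a quantized llama.cpp model.
# bits_per_weight is approximate: ~4.5 for q4_0 (including per-block
# scale overhead), 16 for f16. Weights only; the KV cache adds more.
def approx_model_ram_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

# A 7B model at 4-bit needs ~3.9 GB just for weights, so it is a very
# tight fit on a 4 GB phone; a 3B model at 4-bit (~1.7 GB) is more realistic.
print(round(approx_model_ram_gb(7e9, 4.5), 1))
print(round(approx_model_ram_gb(3e9, 4.5), 1))
```

This is why the earlier suggestion to try 3B models matters for low-RAM devices: the OS and the app itself also need a share of that 4 GB.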

@suoko

suoko commented Oct 11, 2023

> llama.cpp is used as the backend, so you would need to check if llama.cpp supports your GPU, and if it is usable on 4GB RAM with the model you are interested in.

Does it support any mobile GPUs, such as Mali or Adreno?
