Adding the Burn Llama model behind a Cargo feature flag would avoid requiring llama.cpp to be installed on the system.
I am not sure whether this is feasible, since I have not read the code thoroughly; it is just an idea. Feel free to close this issue if the proposal is out of scope.
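As a rough sketch of what the gating might look like (the feature name, dependency versions, and module layout below are assumptions for illustration, not the crate's actual structure):

```toml
[features]
# Hypothetical feature: enable a pure-Rust Llama implementation via Burn
# instead of linking against llama.cpp.
burn-llama = ["dep:burn"]

[dependencies]
# Optional so the crate builds without it when the feature is off.
burn = { version = "0.13", optional = true }
```

The corresponding backend module could then be compiled conditionally with `#[cfg(feature = "burn-llama")]`, so default builds keep the existing llama.cpp path untouched.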