-
Need help understanding how the XNNPACK Backend is implemented in this library

Hey Software-Mansion! Thank you for all of your work on this library! I wanted to ask how you were able to integrate the XNNPACK backend inside of React Native Expo. I've exported my model (Llama 3.2 1B) to .pte and have the tokenizer ready, but I'm unsure how to integrate the XNNPACK backend into the project structure. If you could lend me some wisdom, that'd be great! Thank you!
Replies: 2 comments 1 reply
-
Hi @sskarz, XNNPACK is used out of the box as long as your model is exported to XNNPACK; no further steps are needed on your side. If you want to integrate Llama into your React Native apps, we highly recommend using our useLLM hook. The initializing section is all you need to get started, and if you use the constants shipped with the library, the models will run on XNNPACK.
-
Can anyone help me convert a pre-trained model from Hugging Face to .pte?
Also, the same rule applies to CoreML: if your model is exported to CoreML, it will run with that backend.
If you have any more questions, feel free to ask! :D