Does LLamaSharp expose llama.cpp's finetuning functionality? #706
Replies: 2 comments 1 reply
-
Finetuning is not supported by LLamaSharp yet. Could you tell us more about how you would like to integrate finetuning into your app? For example, handling input data with C#, finetuning in a desktop app, or anything else?
-
Finetuning is on our roadmap, but at the moment we're short-handed. May I ask in which scenario your training pipeline will run? There are plenty of Python libraries that can set up LLM training in a single step, so it sounds like you want to put training and inference together in one app? I'd appreciate your answer, because it helps us see how important this feature is and how best to introduce it in the future. :)
-
I can see that I can apply LoRA adapters to a model via LLamaSharp, but I'm curious whether I can also generate those adapters with this library (and if not, any suggestions for achieving this within my existing .NET project?).
As shown in llama.cpp's example here: https://github.com/ggerganov/llama.cpp/tree/master/examples/finetune
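In the meantime, one workaround is to generate the adapter out-of-process with llama.cpp's `finetune` binary and then load the resulting LoRA file from the .NET side. A sketch adapted from the README of the linked example (the model, checkpoint, and training-data file names are the README's placeholders; exact flags may vary between llama.cpp versions, so check `./bin/finetune --help` for your build):

```shell
# Produce a LoRA adapter from a base GGUF model and a plain-text training file.
# Checkpoints allow resuming; --lora-out is the adapter you would later apply.
./bin/finetune \
        --model-base open-llama-3b-v2-q8_0.gguf \
        --checkpoint-in  chk-lora-open-llama-3b-v2-q8_0-LATEST.gguf \
        --checkpoint-out chk-lora-open-llama-3b-v2-q8_0-ITERATION.gguf \
        --lora-out lora-open-llama-3b-v2-q8_0-ITERATION.bin \
        --train-data "shakespeare.txt" \
        --save-every 10 \
        --threads 6 --adam-iter 30 --batch 4 --ctx 64 \
        --use-checkpointing
```

From a .NET project this could be driven via `System.Diagnostics.Process`, with LLamaSharp only used to apply the finished adapter at inference time.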