Make->CMake #10663
Conversation
The Makefile is being deprecated. Stick with static builds using CMake and skip rpath checks. ggerganov#10514
cd build/bin
ls
Is this intended?
Yes it is. Since the build uses a snapshot of master, it's really helpful to list the bin contents in the build logs. People keep changing build outputs etc., and that helps me troubleshoot automation. Valid?
@@ -34,13 +37,14 @@ Models are not included in this package and must be downloaded separately.
 %setup -n llama.cpp-master
 
 %build
-make -j
+cmake -B build -DBUILD_SHARED_LIBS=0
+cmake --build build --config Release -j$(nproc)
If you only want to bundle llama-server, add -t llama-server to build only that.
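For illustration, a minimal sketch of how the %build step could look with that flag (assuming CMake 3.15+, where -t is shorthand for --target; not the final spec content):

cmake -B build -DBUILD_SHARED_LIBS=0
cmake --build build --config Release -j$(nproc) -t llama-server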
Yes, a good point. I started to package the libs, but it got ugly, and I was debating spinning off a separate RPM for the libs. Open to suggestions.
I don't know how this RPM is intended to be used. If it is meant to be something that distributions can use to bundle a llama.cpp package, it is definitely better to use shared libs and bundle multiple versions of the CPU backend, as I already pointed out before.
Honestly, I am not sure why this file is here in the first place. It looks like something that should be maintained by the distro maintainer. Can you explain what the purpose of this file is?
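As a rough sketch of the shared-library configuration being suggested (the exact options for bundling several CPU backend variants are not shown here and would need to be checked against the current CMake options):

cmake -B build -DBUILD_SHARED_LIBS=ON
cmake --build build --config Release -j$(nproc)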
I am the distro maintainer.
Then it is your decision what you want to include in your distro package. But I still don't see why this needs to be in the llama.cpp repository.
Why not? Should the Docker packaging be removed too?
Just a bit of background: I added this almost 2 years ago, mostly for automation. Automated builds cross-build RPMs for all active versions of Fedora, including Amazon Linux and RHEL distributions. I think it's a pretty useful archive of builds and compatibility integration testing. Arm and Power builds are included.
https://copr.fedorainfracloud.org/coprs/boeroboy/brynzai/monitor/
We don't distribute RPM packages, and I don't think it should be our responsibility to do so or to decide what goes into them. We do distribute docker images. From your own link, I can see that llama.cpp is the only one of these packages that has the .spec in the source repository; in the other cases you maintain a .spec file in your own repository. Why should llama.cpp be different?
I'm part of the project. I'm part of "we" and I'm happy to provide and maintain the RPM specs as I have for over a year. Why can't I just contribute this?
Would you be interested in creating a full set of packages for llama.cpp and ggml that follow the Fedora packaging guidelines? This would imply creating different packages for libggml, libllama, the llama.cpp examples, and possibly additional packages for the different backends, as well as developer packages for the libraries. This may require changes to llama.cpp, but I would be willing to work with you on solving any issues that may appear. This could be used to build official packages for RPM-based systems, and could also serve as an example of how to package llama.cpp for other distro maintainers. In the current state, I don't think it should remain in this repository.
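To make the idea concrete, a hypothetical sketch of how such subpackages could be declared in a spec file (names and dependencies are illustrative assumptions, not an agreed layout):

%package -n libggml
Summary: ggml tensor library

%package -n libggml-devel
Summary: Development files for libggml
Requires: libggml%{?_isa} = %{version}-%{release}

%package -n libllama
Summary: llama.cpp core library
Requires: libggml%{?_isa} = %{version}-%{release}

%package tools
Summary: llama.cpp command-line tools such as llama-server
Requires: libllama%{?_isa} = %{version}-%{release}

Each subpackage would then need its own %description and %files section.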
Yes absolutely. I first made these and deployed a COPR before the libs were refactored out. Also, COPR doesn't allow proprietary licenses, so it won't accommodate CUDA builds. Happy to modernize these a bit and maybe it can be pushed into RPMFusion at some point. I am a registered Fedora packaging maintainer.

Thanks,
John
That would be great. It's ok if the CUDA or other backends cannot be included in every distribution, but other backends like Vulkan should be ok and would be useful to many people. Over time it would be good to have packages for all the backends, but that can be done progressively. Let me know if you need any help to get this started. Some tips:
OK, I agree. Starting work on this now. Does that sound good? Then in theory llama-server can be a universal package as long as the package versions match. I've been using YYYYMMDD for the package version, as there doesn't seem to be standard versioning in releases.
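If it helps, a small sketch of how a date-based version could be expressed in the spec header (this assumes the date is taken from the build environment, which is not ideal for reproducible builds):

Version: %(date +%%Y%%m%%d)

Alternatively, the date could be passed in with rpmbuild --define so the version is fixed at submission time rather than at build time.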
Actually, can anybody advise me the best way to inline or build with the
Sorry if I missed other context.
BTW, we could also add native support for CPack if there is interest.
Thanks for that - I knew it had to be something simple, but the CMake man pages and tab autocompletion weren't showing me any love. I should be able to insert the build env vars for that to be universal. I will need to skip the install as

Happy to coordinate with CPack. At HashiCorp we settled on FPM/Effing Package Manager too. I kept things simple with rpm specs, which can be easily uploaded to Fedora build automation and integration tested across the supported Fedora/Enterprise Linux platforms.
You could write custom
Ah, FPM is great! Used it in the past as well.
You could, but then cmake becomes a prerequisite for simply installing a pre-built binary, which isn't ideal. I'm pretty sure packaging guidelines recommend against dev-tool prereqs for pre-built binaries to keep things simple. The build containers in COPR are pretty flexible - should be user and $HOME neutral. I'm traveling today but will test out your CMake suggestion against COPR build environments in a day or two.
It's a shame the GGML backend is built statically with env variables instead of having different library names. This way each lib RPM will need a conflictswith declaration as people can't install multiple versions of a lib with the same name.

That's not the case. That's what I have been trying to tell you: if you use GGML_BACKEND_DL to build a backend, it is just a dynamic library that can be loaded at runtime. Then you only need a package for the base ggml library, and additional packages for each backend that are installed together with the ggml library package.
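For example, a minimal sketch of that kind of build (GGML_BACKEND_DL is the option discussed above; the Vulkan flag is just one illustrative backend, and the exact option names should be checked against the current CMake files):

cmake -B build -DBUILD_SHARED_LIBS=ON -DGGML_BACKEND_DL=ON -DGGML_VULKAN=ON
cmake --build build --config Release -j$(nproc)

The resulting backend library could then go into its own subpackage that depends on the base ggml package.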
Oh perfect. I didn't notice that was added. Thanks
Hello, I'm Mohammadreza, one of the Fedora maintainers for the llama.cpp and whisper.cpp packages. A while back, I started working on a new spec file to update llama.cpp to its latest version in Fedora. However, due to significant changes in the project, this required a complete rewrite to address the issues effectively. During this process, I managed to segment and resolve several packaging challenges in the llama.cpp package.

I saw your offer to collaborate on creating a full set of packages for llama.cpp and GGML following Fedora's packaging guidelines. Would it be possible to extend this offer to me as well? I'd love to work with you to improve the integration of llama.cpp into the Fedora mainline repository. While we don't support the CUDA backend in Fedora mainline, we do have full ROCm support, partial Vulkan support, and other smaller acceleration frameworks.
Yes, of course. I wasn't aware of these Fedora packages in the mainline repository; I would be happy to work with you to improve them. I think it would be good for developers and users if we could have a set of packages for ggml (and the backends) and llama.cpp that other applications can use. This will require changes to ggml and llama.cpp, but I think it is worth the effort.

@max-krasnyansky's suggestion of using CPack looks very interesting to me. If this works well, it could be very useful to generate all kinds of packages of llama.cpp. I don't have any experience working with CPack, so I would appreciate any insight about it, especially if there are any known issues that may prevent using it to create packages for Fedora or other distributions.
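As a rough idea of what that could look like from the packaging side, assuming CPack support were wired up in the CMake files (which the build does not currently provide):

cmake -B build -DBUILD_SHARED_LIBS=ON
cmake --build build --config Release -j$(nproc)
cpack --config build/CPackConfig.cmake -G RPM

The open question is whether the generated RPMs could be made to satisfy the Fedora packaging guidelines, or whether they would only be suitable as convenience packages.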
The mainlined llama-cpp package is news to me too. It only seems to have

There also seems to be an outstanding PR for the old refactor from
It's up to the package owner to add new members. If you are interested in working on the llama-cpp package or any other AI-related packages, please join the ai/ml Matrix room; there you can talk to the rest of the members.