Make->CMake #10663
base: master
@@ -19,10 +19,13 @@ Release: 1%{?dist}
 Summary: CPU Inference of LLaMA model in pure C/C++ (no CUDA/OpenCL)
 License: MIT
 Source0: https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
-BuildRequires: coreutils make gcc-c++ git libstdc++-devel
+BuildRequires: coreutils cmake make gcc-c++ git libstdc++-devel
 Requires: libstdc++
 URL: https://github.com/ggerganov/llama.cpp

+# CMake rpaths kill the build.
+%global __brp_check_rpaths %{nil}
+
 %define debug_package %{nil}
 %define source_date_epoch_from_changelog 0
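For context, `__brp_check_rpaths` is the Fedora/RHEL build-root policy check that fails the build when binaries carry unexpected rpaths; setting it to `%{nil}` skips the check entirely. A hedged alternative sketch (not part of this PR, and untested against this spec) would be to keep the check and instead tell CMake not to embed rpaths at configure time:

```sh
# Sketch: suppress rpaths in the built binaries rather than disabling the
# RPM check. CMAKE_SKIP_RPATH is a standard CMake variable.
cmake -B build -DBUILD_SHARED_LIBS=0 -DCMAKE_SKIP_RPATH=ON
```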
@@ -34,13 +37,14 @@ Models are not included in this package and must be downloaded separately.
 %setup -n llama.cpp-master

 %build
-make -j
+cmake -B build -DBUILD_SHARED_LIBS=0
+cmake --build build --config Release -j$(nproc)

 %install
 mkdir -p %{buildroot}%{_bindir}/
+cd build/bin
+ls
Comment on lines +45 to +46

**Reviewer:** Is this intended?

**Author:** Yes, it is. Since the build uses a snapshot of master, it's really helpful to have the bin contents listed in the build logs. People keep changing the build outputs, and that helps me troubleshoot the automation. Valid?
 cp -p llama-cli %{buildroot}%{_bindir}/llama-cli
 cp -p llama-server %{buildroot}%{_bindir}/llama-server
 cp -p llama-simple %{buildroot}%{_bindir}/llama-simple

 mkdir -p %{buildroot}/usr/lib/systemd/system
 %{__cat} <<EOF > %{buildroot}/usr/lib/systemd/system/llama.service
@@ -69,9 +73,7 @@ rm -rf %{buildroot}
 rm -rf %{_builddir}/*

 %files
 %{_bindir}/llama-cli
 %{_bindir}/llama-server
 %{_bindir}/llama-simple
 /usr/lib/systemd/system/llama.service
 %config /etc/sysconfig/llama
**Reviewer:** If you only want to bundle llama-server, add `-t llama-server` to build only that.

**Author:** Yes, a good point. I started to package the libs, but it got ugly, and I was debating spinning off a separate RPM for the libs. Open to suggestions.
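The reviewer's suggestion refers to `cmake --build`'s `-t`/`--target` flag. A minimal sketch of what the spec's %build section could look like with it, assuming llama-server were the only binary being packaged:

```sh
# Sketch: build only the llama-server target instead of the full project.
cmake -B build -DBUILD_SHARED_LIBS=0
cmake --build build --config Release -j"$(nproc)" -t llama-server
```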
**Reviewer:** I don't know how this RPM is intended to be used. If it is meant to be something that distributions can use to bundle a llama.cpp package, it is definitely better to use shared libs and bundle multiple versions of the CPU backend, as I already pointed out before.

Honestly, I am not sure why this file is here in the first place. It looks like something that should be maintained by the distro maintainers. Can you explain what the purpose of this file is?
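A hedged sketch of the shared-libs configuration the reviewer is describing, assuming the `GGML_BACKEND_DL` and `GGML_CPU_ALL_VARIANTS` CMake options that llama.cpp exposed around this time (the exact option names should be verified against the tree being packaged):

```sh
# Sketch: shared libraries plus dynamically loadable CPU backend variants,
# so a single binary package can run on CPUs with different instruction sets.
cmake -B build \
  -DBUILD_SHARED_LIBS=ON \
  -DGGML_BACKEND_DL=ON \
  -DGGML_CPU_ALL_VARIANTS=ON
cmake --build build --config Release -j"$(nproc)"
```

Packaging this way would also mean shipping the ggml/llama shared libraries (or splitting them into a separate libs RPM, as the author mentions above) rather than only the statically linked binaries listed in %files.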