Make->CMake #10663

Open · wants to merge 1 commit into master
14 changes: 8 additions & 6 deletions .devops/llama-cpp.srpm.spec
@@ -19,10 +19,13 @@ Release: 1%{?dist}
Summary: CPU Inference of LLaMA model in pure C/C++ (no CUDA/OpenCL)
License: MIT
Source0: https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
-BuildRequires: coreutils make gcc-c++ git libstdc++-devel
+BuildRequires: coreutils cmake make gcc-c++ git libstdc++-devel
Requires: libstdc++
URL: https://github.com/ggerganov/llama.cpp

+# CMake rpaths kill the build.
+%global __brp_check_rpaths %{nil}

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

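A side note on the rpath workaround above: one alternative, sketched here and not used in this PR, is to ask CMake not to embed rpaths in the first place via CMAKE_SKIP_RPATH, which may avoid tripping the RPM rpath check rather than disabling it:

    # Sketch: suppress rpath embedding entirely instead of disabling the brp check
    cmake -B build -DBUILD_SHARED_LIBS=0 -DCMAKE_SKIP_RPATH=ON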
@@ -34,13 +37,14 @@ Models are not included in this package and must be downloaded separately.
%setup -n llama.cpp-master

%build
-make -j
+cmake -B build -DBUILD_SHARED_LIBS=0
+cmake --build build --config Release -j$(nproc)
Collaborator:
If you only want to bundle llama-server, add -t llama-server to build only that.
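As a sketch, assuming the same %build invocation as above, that suggestion would look like:

    cmake -B build -DBUILD_SHARED_LIBS=0
    # -t (--target) builds only the named target and its dependencies
    cmake --build build --config Release -j$(nproc) -t llama-server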

Contributor Author:
Yes, a good point. I started to package the libs, but it got ugly, and I was debating spinning off a separate RPM for the libs. Open to suggestions.
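For illustration, a minimal sketch of what a libs subpackage could look like within this same spec; the library names are assumptions based on what llama.cpp typically produces with shared libs enabled:

    %package libs
    Summary: Shared libraries for llama.cpp

    %description libs
    Shared libraries (libllama, libggml) used by the llama.cpp binaries.

    %files libs
    %{_libdir}/libllama.so*
    %{_libdir}/libggml*.so*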

Collaborator:
I don't know how this RPM is intended to be used. If it is meant to be something that distributions can use to bundle a llama.cpp package, it is definitely better to use shared libs and to bundle multiple versions of the CPU backend, as I pointed out before.
Honestly, I am not sure why this file is here in the first place. It looks like something that should be maintained by the distro maintainer. Can you explain the purpose of this file?
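A sketch of the %build change that suggestion implies, assuming the GGML_BACKEND_DL and GGML_CPU_ALL_VARIANTS CMake options (dynamically loaded backends and multiple CPU variants) are available in the checked-out revision:

    # Sketch: shared libs plus dynamically loaded CPU backend variants
    cmake -B build -DBUILD_SHARED_LIBS=ON -DGGML_BACKEND_DL=ON -DGGML_CPU_ALL_VARIANTS=ON
    cmake --build build --config Release -j$(nproc)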


%install
mkdir -p %{buildroot}%{_bindir}/
cp -p llama-cli %{buildroot}%{_bindir}/llama-cli
+cd build/bin
+ls
Comment on lines +45 to +46

Collaborator:
Is this intended?

Contributor Author:
Yes, it is. Since the build uses a snapshot of master, it's really helpful to list the bin contents in the build logs. People keep changing the build outputs, and the listing helps me troubleshoot the automation. Valid?
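If the concern is outputs being renamed, a slightly stronger sketch (using the binary names from the cp lines below) fails the build with a directory listing when something is missing, rather than only logging:

    # Sketch: run from build/bin; abort with a listing if an expected binary is absent
    for bin in llama-cli llama-server llama-simple; do
        test -x "${bin}" || { echo "missing ${bin}"; ls; exit 1; }
    done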

cp -p llama-server %{buildroot}%{_bindir}/llama-server
cp -p llama-simple %{buildroot}%{_bindir}/llama-simple

mkdir -p %{buildroot}/usr/lib/systemd/system
%{__cat} <<EOF > %{buildroot}/usr/lib/systemd/system/llama.service
@@ -69,9 +73,7 @@ rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llama-cli
%{_bindir}/llama-server
%{_bindir}/llama-simple
/usr/lib/systemd/system/llama.service
%config /etc/sysconfig/llama
