Packaging.md

Packaging and Release

Package generation by CPack

Introduction to CPack

CPack can generate native binary packages for Ubuntu and Fedora, and even for macOS and Windows. NSIS is the CPack package generator for Windows, but it has not been tested yet. Installing via conda on Windows is a better choice for C++ and Python API users.

Inside the build folder, run `make package` to generate the `*.deb` or `*.rpm` package.

CPack's configuration (the package meta-info collection) is located at the bottom of the project's top-level CMakeLists.txt. For each target to install, call the corresponding CMake command `install(TARGETS ...)`.
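The actual values live in the project's top-level CMakeLists.txt; as a rough sketch (all variable values below are placeholders, not the project's real settings), a typical CPack meta-info block looks like:

```cmake
# Sketch only: typical CPack meta-info, set before include(CPack).
# The real values are in the project's top-level CMakeLists.txt.
set(CPACK_PACKAGE_NAME "parallel-preprocessor")
set(CPACK_PACKAGE_VERSION "${PACKAGE_VERSION}")
set(CPACK_PACKAGE_CONTACT "maintainer@example.com")  # assumed placeholder
set(CPACK_GENERATOR "DEB;RPM")                       # native Linux packages
include(CPack)  # must come after the CPACK_* variables are set
```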

packaged files organization

The generated package is an all-in-one package that includes everything (lib, bin, python module, headers).

For example, after extracting the deb package, all installed files are organized into the subfolders `bin/`, `lib/`, `include/`:

  • Binaries, executables
  • Headers, by PUBLIC_HEADER DESTINATION include/Geom
  • Library, shared libraries like libpppGeom.so
  • Python interface module

If the system-wide python3 (installed from the official repository) is used, which can be confirmed by `which python3`, this deb/rpm will install the Python interface module to the correct location.

Note: this folder structure does not work on Windows.

Headers are not a compiling target, but they can be installed either as plain files or attached to a compiling target via the PUBLIC_HEADER property. https://gitlab.kitware.com/cmake/community/wikis/doc/cpack/Component-Install-With-CPack

set_target_properties(MyGeom PROPERTIES
    PUBLIC_HEADER "${GEOM_HEADERS}")  # must be quoted, otherwise only the first header is used

install(TARGETS MyGeom
  RUNTIME DESTINATION bin
  LIBRARY DESTINATION lib
  PUBLIC_HEADER DESTINATION include/ppp/Geom
  COMPONENT libraries
)
# files can also be renamed with `RENAME`, or given `PERMISSIONS`

test data

It has not been decided where to install the test data; therefore, the unit tests can only run in the build folder.


Unit tests can be turned off with the CMake option `-DPPP_USE_TEST=OFF`.

Python interface module

python site-package detection

python3 -c "import sys; print(sys.path)"
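The site-packages paths can also be queried with the stdlib sysconfig module — a sketch, not part of the project's own scripts:

```python
import sysconfig

# platlib: platform-specific site-packages, where compiled extension
# modules such as ppp.so / ppp.pyd belong
print(sysconfig.get_path("platlib"))

# purelib: platform-independent site-packages, for pure-Python files
print(sysconfig.get_path("purelib"))
```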

CMake can install all the Python pieces, such as utility scripts and the extension module `ppp.so`/`ppp.pyd`:

    execute_process(
        COMMAND "${PYTHON_EXECUTABLE}" -c "if True:
          from distutils import sysconfig as sc
          print(sc.get_python_lib(prefix='', plat_specific=True))"
        OUTPUT_VARIABLE PYTHON_SITE
        OUTPUT_STRIP_TRAILING_WHITESPACE)
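With `PYTHON_SITE` detected, the extension module can then be installed relative to the install prefix. A sketch, under the assumption that the pybind11 extension target is named `ppp`:

```cmake
# Sketch (assumes the pybind11 extension target is called `ppp`):
# install the compiled module into the site-packages dir detected above.
install(TARGETS ppp
        LIBRARY DESTINATION ${PYTHON_SITE}   # Linux/macOS: ppp.*.so
        RUNTIME DESTINATION ${PYTHON_SITE})  # Windows: ppp.pyd
```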

CPack should generate an all-in-one package (deb/rpm) including this Python module. This Python module installation route needs more testing on different platforms.

conda is another choice to support more platforms: `make install` should install `ppp.so` into the conda Python's site-packages, although it is not yet clear how this works with conda on Windows.

CMake's FindPython module provides Python_SITELIB and Python_SITEARCH:

  • Python_SITELIB: third-party platform-independent installation directory, i.e. the information returned by distutils.sysconfig.get_python_lib(plat_specific=False, standard_lib=False).
  • Python_SITEARCH: third-party platform-dependent installation directory, i.e. the information returned by distutils.sysconfig.get_python_lib(plat_specific=True, standard_lib=False).
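A minimal sketch of querying these variables in a CMakeLists.txt (not the project's actual configuration):

```cmake
# Sketch: let FindPython report the site-packages directories.
find_package(Python COMPONENTS Interpreter Development)
message(STATUS "pure-Python site dir:      ${Python_SITELIB}")
message(STATUS "extension module site dir: ${Python_SITEARCH}")
```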

Generate python interface module ppp

The Python interface module ppp gives an API similar to the C++ one, e.g. the class PipelineController, via pybind11 wrapping of the C++ API. It is not a pure Python module but a C-extension module, which must be compiled against a specific Python version. This means a different installer is needed for Python 3.6 and Python 3.7, even though they link to the same C++ shared libraries (libpppGeom.so).

Although CMake generates the Python extension `ppp.*.so` on Linux, it targets only the one Python version detected by CMake. It should be possible to add a CMake option to choose among Python versions if multiple are installed. To specify the Python version during CMake configuration:

-DPython_EXECUTABLE=full_path_to_python

An extra Python interface module can be generated by setup.py for another version of Python, after CMake has successfully compiled all the C++ targets such as the pppGeom shared library.

It is driven by `full_path_to_python setup.py bdist_wheel` (implemented but not yet tested).

Currently, there are two hardcoded paths in <../python/setup.py>. It is assumed that the binary build dir is the `build` folder inside the repository source dir (see how the variable `ppp_lib_dir` is assigned in setup.py), just as in the compiling instructions in <CompileOnLinux.md>. If a different build dir is used, manually change that variable. Another assumption is `occt_include_dir = "/usr/include/opencascade"`; change it if necessary.
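The two assumptions can be sketched as follows (the variable names come from setup.py; the path construction here is illustrative, not the actual setup.py code):

```python
import os

# Assumption 1: the C++ build output lives in <repo>/build
# (adjust if you configured CMake in a different directory)
repo_root = os.getcwd()  # stand-in for the repository source dir
ppp_lib_dir = os.path.join(repo_root, "build")

# Assumption 2: OpenCASCADE headers from the distro package
occt_include_dir = "/usr/include/opencascade"

print(ppp_lib_dir)
print(occt_include_dir)
```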

If the ppp module is not built, geomPipeline.py still works

When the ppp module is not built/installed, geomPipeline.py will start an external process to run the command `geomPipeline your_config.json`.

geomPipeline.py is a pure Python script, so `geomPipeline your_config.json` works with any version of CPython; the drawback is that the Python wrapping of the C++ classes is not available.
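This fallback behaviour can be sketched as follows (a minimal illustration, not the actual geomPipeline.py code):

```python
import shutil
import subprocess

def run_pipeline(config_json):
    """Use the compiled ppp module if present, otherwise fall back to
    running the external geomPipeline executable."""
    try:
        import ppp  # pybind11 C-extension module, if built/installed
        # ... use the ppp API directly, e.g. ppp.PipelineController ...
    except ImportError:
        exe = shutil.which("geomPipeline")
        if exe is None:
            raise RuntimeError("neither the ppp module nor the "
                               "geomPipeline executable is available")
        subprocess.check_call([exe, config_json])
```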

Release

It is expected that some binary packages will be generated by the CI system on the hosting website. Tutorial for manually uploading a binary as a release: https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository

Procedure to make a tagged release

The GitHub workflow CI should generate new binary packages on each git push; that is the dev build, with version 0.3-dev.

At the stable release stage:

  • `git tag v0.3.0` (or create a branch if necessary).
  • `cd build && cmake -DPPP_VERSION_NAME="0.3.0" -DCMAKE_BUILD_TYPE=Release .. && make -j4 && make package`
  • Run the unit tests, then manually upload the packages to the GitHub Release.

Docker will be used to generate binary packages for more platforms.

If the version increases, edit:

  • PROJECT_VERSION: "0.3-dev" in github workflow CI yml files
  • PACKAGE_VERSION in project CMakeLists.txt
  • download link in Readme.md

Generate native packages by CPack in github workflows

It is important to have a meaningful and precise package file name, which is fixed in the project's CMakeLists.txt. The package file follows the name pattern parallel-preprocessor-<this_software_version>-dev_<OS name>-<OS version>.<package_suffix>, so that the generated package file can later be located and uploaded to the release endpoints by the CI workflow.
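As an illustration of how the pattern expands (the concrete values below are made up, not a real release):

```shell
# Hypothetical example values; the real ones come from CMakeLists.txt / CI.
VERSION="0.3-dev"
OS_NAME="ubuntu"
OS_VERSION="20.04"
SUFFIX="deb"
echo "parallel-preprocessor-${VERSION}_${OS_NAME}-${OS_VERSION}.${SUFFIX}"
# -> parallel-preprocessor-0.3-dev_ubuntu-20.04.deb
```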

If your OS is not supported, you need to compile it yourself; there is documentation on installing the dependencies and building on all major platforms.

Update the source zip: https://github.com/qingfengxia/parallel-preprocessor/archive/dev.zip

Upload daily-build packages to Github Release

The official action actions/upload-release-asset@v1 did not work for me, so I use svenstaro/upload-release-action@v2. First, create the release tag, then upload a binary package manually (this creates the file endpoint URL) to reserve an asset URL; then this action updates the package on each push.

    # the release asset filename (created manually before running this action)
    - name: Upload binary package to release
      uses: svenstaro/upload-release-action@v2
      with:
        repo_token: ${{ secrets.GITHUB_TOKEN }}
        file: ${{ github.workspace }}/build/${{ env.PROJECT_NAME }}-${{ env.PROJECT_VERSION }}_${{ matrix.os }}.deb
        asset_name: ${{ env.PROJECT_NAME }}-${{ env.PROJECT_VERSION }}_${{ matrix.os }}.deb
        tag: dev
        overwrite: true
        prerelease: true
      if: always()

Package upload privilege

Error: Resource not accessible by integration

#7

Only users in the admin group have write access to the repo (e.g. to upload packages to release endpoints). PRs sent by other users will fail, leading to CI check failure.

Issue #7 was fixed by disabling the package upload except on the main branch: `if: github.ref == 'refs/heads/main'  # only upload packages on the main branch, not on PRs`
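In the workflow file, that guard replaces the `if: always()` condition on the upload step — a sketch:

```yaml
# Sketch: only run the upload step for pushes to main, never for PRs.
- name: Upload binary package to release
  uses: svenstaro/upload-release-action@v2
  if: github.ref == 'refs/heads/main'
```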

Meanwhile, a developer working with a forked repo should have write access to the release endpoint of the fork.

Binary conda package (under testing) for all platforms

Conda package building also relies on the `install(TARGETS ...)` CMake command.

Compiling parallel-preprocessor on Windows with dependencies installed from conda-forge has been confirmed to work. Upload to conda-forge is not yet configured.

The conda recipe is based on the CMake build system. It will install all components (C++ headers, shared libraries, Python interface) into conda's installation directory.

Containers such as Docker images

Compiling and building in Docker has been tested; Dockerfiles for Fedora, CentOS, and Ubuntu LTS are available in the source repository.

Deploy on HPC

CentOS 7 has been tested for compilation. Deployment on HPC without admin privileges is under investigation. The UKAEA HPC runs CentOS 7.x. A Singularity image can be a choice for user installation, to maximize multi-threading or MPI performance.