Intel Container Experience Kits Setup Scripts provide a simplified mechanism for installing and configuring Kubernetes clusters on Intel Architecture using Ansible.
The software provided here is for reference only and not intended for production environments.
The most recent patch release, v24.01.1, is mainly for supporting the on_prem_aibox profile. If any issues are observed when using this release for other profiles, please revert to using v24.01 instead.
NOTE: The instructions provided below assume, by default, a deployment done as the root user. If you want to deploy as a non-root user, read this file first and then continue with the following steps as that non-root user.
1. Decide which configuration profile you want to use and export the `PROFILE` environment variable.

   NOTE: The variable is used only to simplify execution of the steps listed below.

   - For Kubernetes Basic Infrastructure deployment: `export PROFILE=basic`
   - For Kubernetes Access Edge Infrastructure deployment: `export PROFILE=access`
   - For Kubernetes Edge Ready Infrastructure deployment: `export PROFILE=base_video_analytics`
   - For Kubernetes Regional Data Center Infrastructure deployment: `export PROFILE=regional_dc`
   - For Kubernetes Remote Central Office-Forwarding Configuration deployment: `export PROFILE=remote_fp`
   - For Kubernetes Infrastructure On Customer Premises deployment: `export PROFILE=on_prem`
   - For Kubernetes Infrastructure On Customer Premises for VSS deployment: `export PROFILE=on_prem_vss`
   - For Kubernetes Infrastructure On Customer Premises for AI Box deployment: `export PROFILE=on_prem_aibox`
   - For Kubernetes Infrastructure On Customer Premises for SW-Defined Factory deployment: `export PROFILE=on_prem_sw_defined_factory`
   - For Kubernetes Build-Your-Own Infrastructure deployment: `export PROFILE=build_your_own`
2. Install python dependencies using one of the following methods.

   NOTE: Ensure that at least python3.9 is installed on the ansible host.

   a) Non-invasive virtual environment using pipenv:

      ```bash
      pip3 install pipenv
      pipenv install
      # Then, to run and use the environment:
      pipenv shell
      ```

   b) Non-invasive virtual environment using venv:

      ```bash
      python3 -m venv venv
      # Then, to activate the new virtual environment:
      source venv/bin/activate
      # Install dependencies in the venv:
      pip3 install -r requirements.txt
      ```

   c) System-wide environment (not recommended):

      ```bash
      pip3 install -r requirements.txt
      ```
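   Whichever method you choose, it may be worth confirming first that the interpreter meets the version from the NOTE above; a minimal sketch:

   ```bash
   # Fails with an AssertionError if python3 is older than 3.9
   python3 -c 'import sys; assert sys.version_info >= (3, 9), sys.version'
   ```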
3. Install ansible collection dependencies with the following command:

   ```bash
   ansible-galaxy install -r collections/requirements.yml
   ```
4. Copy the SSH key to all Kubernetes nodes or VM hosts you are going to use.

   ```bash
   ssh-copy-id <user>@<host>
   ```
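   If the ansible host does not have an SSH key pair yet, one can be generated first; a minimal sketch (the key type and path are common defaults, not a project requirement):

   ```bash
   # Generate a key pair without a passphrase, then distribute the public key
   ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
   ssh-copy-id -i ~/.ssh/id_ed25519.pub <user>@<host>
   ```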
5. Generate example host_vars, group_vars and inventory files for Intel Container Experience Kits profiles.

   NOTE: It is highly recommended to read this file before generating the profiles.

   Architecture and Ethernet Network Adapter type can be auto-discovered:

   ```bash
   make auto-examples HOSTS=X.X.X.X,X.X.X.X USERNAME=<user>
   ```

   or specified manually (values marked with `**` are the defaults):

   ```bash
   make examples ARCH=<atom,core,icx,**spr**,emr,gnr,ultra> NIC=<fvl,**cvl**>
   ```
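   For the manual path, the CPU model and Ethernet adapter of a target node can be checked up front to pick the `ARCH` and `NIC` values; a sketch (assuming `fvl` maps to Intel 700-series and `cvl` to 800-series adapters):

   ```bash
   # Inspect a target node to choose ARCH and NIC values for "make examples"
   ssh <user>@<host> 'lscpu | grep "Model name"; lspci | grep -i ethernet'
   ```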
6. Copy the example inventory file to the project root dir.

   ```bash
   cp examples/k8s/${PROFILE}/inventory.ini .
   ```

   or, for the VM case:

   ```bash
   cp examples/vm/${PROFILE}/inventory.ini .
   ```
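   Once copied (and again after editing it in the next step), the inventory structure can be validated without touching any hosts; a quick sketch:

   ```bash
   # Print the host/group structure Ansible sees in the inventory
   ansible-inventory -i inventory.ini --graph
   ```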
7. Update the inventory file with your environment details.

   For the VM case: update details relevant for the vm_host.

   NOTE: At this stage you can inspect your target environment by running:

   ```bash
   ansible -i inventory.ini -m setup all > all_system_facts.txt
   ```

   In the `all_system_facts.txt` file you will find details about your hardware, operating system and network interfaces, which will help you properly configure the Ansible variables in the next steps.
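   If the full fact dump is too noisy, the setup module's `filter` parameter narrows it down; a sketch (the fact name is a standard Ansible fact, not CEK-specific):

   ```bash
   # List only the network interface names reported by each host
   ansible -i inventory.ini -m setup -a "filter=ansible_interfaces" all
   ```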
8. Copy the group_vars and host_vars directories to the project root dir.

   ```bash
   cp -r examples/k8s/${PROFILE}/group_vars examples/k8s/${PROFILE}/host_vars .
   ```

   or, for the VM case:

   ```bash
   cp -r examples/vm/${PROFILE}/group_vars examples/vm/${PROFILE}/host_vars .
   ```
9. Update the group and host vars to match your desired configuration. Refer to this section for more details.

   NOTE: Please pay special attention to the `http_proxy`, `https_proxy` and `additional_no_proxy` vars if you're behind a proxy.

   For the VM case:

   - update details relevant for the vm_host (e.g. `dataplane_interfaces`, ...)
   - update the VMs definition in `host_vars/host-for-vms-1.yml` - use that template for the first vm_host
   - update the VMs definition in `host_vars/host-for-vms-2.yml` - use that template for the second and all other vm_hosts
   - update/create host_vars for all defined VMs (e.g. `host_vars/vm-ctrl-1.cluster1.local.yml` and `host_vars/vm-work-1.cluster1.local.yml`). If `vm_cluster_name` is not defined or is empty, short host_vars file names should be used for the VMs (e.g. `host_vars/vm-ctrl-1.yml` and `host_vars/vm-work-1.yml`). The needed details are at least `dataplane_interfaces`. For more details see the VM case configuration guide.
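   To see which of the copied files set the proxy-related vars mentioned in the NOTE above, a simple search works; a sketch:

   ```bash
   # Locate every occurrence of the proxy vars in the copied configuration
   grep -rn -e http_proxy -e https_proxy -e additional_no_proxy group_vars/ host_vars/
   ```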
10. Mandatory: Apply the patch for the Kubespray collection.

    ```bash
    ansible-playbook -i inventory.ini playbooks/k8s/patch_kubespray.yml
    ```
11. Execute `ansible-playbook`.

    NOTE: It is recommended to use `--flush-cache` (e.g. `ansible-playbook --flush-cache -i inventory.ini playbooks/remote_fp.yml`) when executing `ansible-playbook`, in order to avoid issues such as skipped tasks/roles or a failure to update inventory details from a previous run.

    ```bash
    ansible-playbook -i inventory.ini playbooks/${PROFILE}.yml
    ```

    NOTE: For the on_prem_aibox case, the `-b -K` flags need to be added for localhost deployment:

    ```bash
    ansible-playbook -i inventory.ini -b -K playbooks/on_prem_aibox.yml
    ```

    or, for the VM case:

    ```bash
    ansible-playbook -i inventory.ini playbooks/vm.yml
    ```

    NOTE: VMs are accessible from the ansible host via `ssh vm-ctrl-1` or `ssh vm-work-1`.
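    After the playbook finishes, a quick sanity check of the cluster can be done; a sketch, assuming kubectl has been configured on the first controller node by the deployment:

    ```bash
    # Verify that all nodes registered and that core pods are running
    kubectl get nodes -o wide
    kubectl get pods -A
    ```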
Refer to the documentation to see how to clean up an existing deployment or a specific feature.
Refer to the documentation linked below to see configuration details for selected capabilities and deployment profiles.
- SRIOV Network Device Plugin and SRIOV CNI plugin
- MinIO Operator
- Adding and removing worker node(s)
- VM case configuration guide
- VM multinode setup guide
- VM cluster expansion guide
- Non-root deployment guide
- Required packages on the target servers: Python3.
- Required packages on the ansible host (where the ansible playbooks are run): Python3.8-3.10 and Pip3.
- Required python packages on the ansible host: see requirements.txt.
- SSH keys copied to all Kubernetes cluster nodes (the `ssh-copy-id <user>@<host>` command can be used for that).
- For the VM case, SSH keys copied to all VM hosts (the `ssh-copy-id <user>@<host>` command can be used for that).
- Internet access on all target servers is mandatory. Proxy is supported.
- At least 8GB of RAM on the target servers/VMs for a minimal set of functions (some Docker image builds are memory-hungry and may cause OOM kills of the Docker registry - observed with 4GB of RAM), more if you plan to run heavy workloads such as NFV applications.
- For RHEL-like OSes, SELinux must be configured prior to the CEK deployment and the required SELinux-related packages should be installed. CEK itself preserves the initial SELinux state, but SELinux-related packages might be installed during the k8s cluster deployment as a dependency (e.g. for the Docker engine), causing OS boot failures or other inconsistencies if SELinux is not configured properly. The preferred SELinux state is `permissive`. For more details, please refer to the respective OS documentation.
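  A common way to put a RHEL-like target into the preferred state before deployment (run as root; a sketch, assuming SELinux is currently enabled):

  ```bash
  # Check the current SELinux state
  getenforce
  # Switch to permissive for the running system...
  setenforce 0
  # ...and make the change persistent across reboots
  sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
  ```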
Contributors should, besides the basic set of packages, also install the developer packages, using the command:

```bash
pipenv install --dev
```

or

```bash
pip install -r ci-requirements.txt
```
Several lint checks are configured for the repository. All of them can be run in a local environment using the prepared bash scripts or by leveraging pre-commit hooks.
Prerequisite packages:
- developer python packages (ci-requirements.txt/Pipfile)
- shellcheck
- pre-commit python package
Required checks in CI:
- ansible-lint
- bandit
- pylint
- shellcheck
Each check can be run with the following command:

```bash
./scripts/run_<linter_name>.sh
```

or alternatively:

```bash
pre-commit run <linter_name> --all-files
```
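To have the hooks run automatically on every commit (assuming the pre-commit package from the prerequisites is installed):

```bash
# Register the repository's pre-commit hooks with git
pre-commit install
```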