Description
With a bit of modification of the kubeinit files I am able to get OKD deployed on Rocky Linux 8.5. You can see the modifications here: https://github.com/Tokix/kubeinit. I could open a pull request, but there is one thing that does not work as expected: after the server restarts, the routes vanish and I can no longer reach the frontend.
To Reproduce
Steps to reproduce the behavior:
- Install a Rocky Linux 8.5 machine and set up the SSH connection to nyctea as described in the manual.
- In my case I additionally had to install Python on the hypervisor_host machine before the playbook would run successfully:
```shell
yum install python3
```
- Clone the changes for Rocky 8.5:
```shell
git clone https://github.com/Tokix/kubeinit.git
```
- Run the playbook:
```shell
ansible-playbook \
    -v --user root \
    -e kubeinit_spec=okd-libvirt-3-1-1 \
    -i ./kubeinit/inventory \
    ./kubeinit/playbook.yml
```
- Enable the frontend:
```shell
ssh root@nyctea
chmod +x create-external-ingress.sh
./create-external-ingress.sh
```
- Set up the DNS entries for your system.
- Check that the URL works (it does at this point): https://console-openshift-console.apps.okdcluster.kubeinit.local/
- Reboot the server:
```shell
init 6
```
- The URL no longer works: https://console-openshift-console.apps.okdcluster.kubeinit.local/
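To make the failure easier to spot without opening a browser, this is a minimal sketch of the check I run after the reboot. It assumes the guest network is 10.0.0.0/24 (based on the 10.0.0.1-x guest addresses mentioned further down); the here-doc holds a sample `ip route` dump so the sketch is self-contained, and on a live hypervisor you would use `routes="$(ip route)"` instead.

```shell
#!/bin/sh
# Sketch: check whether the cluster route survived the reboot.
# The 10.0.0.0/24 subnet is an assumption; adjust it to your cluster network.
# Sample dump of a post-reboot routing table that has lost the cluster route:
routes="$(cat <<'EOF'
default via 192.168.0.1 dev eth0 proto dhcp
192.168.0.0/24 dev eth0 proto kernel scope link
EOF
)"
# Look for a route entry that starts with the cluster subnet.
if printf '%s\n' "$routes" | grep -q '^10\.0\.0\.0/24'; then
    echo "cluster route present"
else
    echo "cluster route missing"
fi
```

With the sample dump above the check reports the route as missing, which matches what I see on the hypervisor after `init 6`.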
Expected behavior
The external URL of the cluster should be available after a restart, and the routes should be set.
Screenshots
Working route configuration before the restart:
Route configuration after the restart:
Infrastructure
- Hypervisor OS: Rocky Linux
- Version: 8.5
Deployment command
```shell
ansible-playbook \
    -v --user root \
    -e kubeinit_spec=okd-libvirt-3-1-1 \
    -i ./kubeinit/inventory \
    ./kubeinit/playbook.yml
```
Inventory file diff
I made no changes to the inventory file.
Additional context
As SELinux is active on Rocky Linux 8.5, my first thought was that some changes could not be persisted, so I disabled SELinux for testing. However, the cluster is still not reachable after a restart.
I checked this old issue, https://forums.opensuse.org/showthread.php/530879-openvswitch-loses-configuration-on-reboot, but the boot ordering of openvswitch and network.service appears to be fine on my machine.
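In case the ordering does turn out to matter after all, a systemd drop-in can force the network service to start only after Open vSwitch is up. This is only a hypothetical sketch: the drop-in path and the assumption that Rocky 8.5 uses NetworkManager.service (rather than the legacy network.service) are mine, not from the kubeinit docs.

```ini
# /etc/systemd/system/NetworkManager.service.d/10-after-ovs.conf (hypothetical drop-in)
[Unit]
# Pull in openvswitch and make sure it is started before NetworkManager.
Wants=openvswitch.service
After=openvswitch.service
```

After creating the drop-in, `systemctl daemon-reload` and a reboot would show whether the ordering changes anything.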
Furthermore, I re-ran the steps from "Attach our cluster network to the logical router" in kubeinit/roles/kubeinit_libvirt/tasks/create_network.yml. This restored the correct routing table, but I am still unable to reach the guest systems via 10.0.0.1-x.
Is there any script or service that needs to be (or can be) re-run to restore the networking after a reboot?
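Until there is an answer, the workaround I am considering is a oneshot systemd unit that replays the route-creation steps at boot, once Open vSwitch is up. This is a hypothetical sketch: the unit name and the `/usr/local/bin/kubeinit-restore-routes.sh` wrapper script (which would replay the "Attach our cluster network to the logical router" tasks from create_network.yml) are assumptions of mine, not part of kubeinit.

```ini
# /etc/systemd/system/kubeinit-restore-routes.service (hypothetical)
[Unit]
Description=Re-create kubeinit cluster routes after a reboot
Wants=openvswitch.service
After=openvswitch.service NetworkManager.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Hypothetical wrapper replaying the route-creation steps from create_network.yml
ExecStart=/usr/local/bin/kubeinit-restore-routes.sh

[Install]
WantedBy=multi-user.target
```

If a proper fix exists (e.g. persisting the routes so they survive the reboot), that would obviously be preferable to this workaround.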
In any case, I am thankful for any hints; let me know if you need more information.
Thank you in any case for the great project :)