Describe the bug
To synchronize Kubernetes objects from a KCP namespace to its corresponding sync target (and, in turn, to a namespace in the pcluster), I have created placement policies mapping each sync target/location to a KCP namespace.
However, I have encountered an inconsistency in KCP's behavior when creating the API binding on a MacBook (Darwin ARM64, M1 Pro) compared to a Linux VM (AMD64).
On the Linux VM, creating an API binding for the Kubernetes API export results in the termination of the namespace in the provisioned cluster that corresponds to the KCP namespace. Yet this does not occur on the MacBook (Darwin ARM64): there, creating an API binding for Kubernetes resources does not affect the namespace in the provisioned cluster.
Here is the sequence of steps I follow to establish KCP wiring:
- 4-ws-sync.sh: Create a workspace and deploy a sync target in each target cluster.
- 5-labelsyncer.sh: Label the sync targets.
- 6-ns-loc-pp.sh: Create the KCP namespace, location, and placement policy.
- 7a-APIBINDING.sh: Create the API binding.
- env_variables.sh: contains the environment variables (cluster names, workspace, etc.).
The inconsistency appears in KCP's behavior during step 4 above (creating the API binding) on the Linux VM (AMD64).
If anyone has encountered a similar issue or has insights to share, your input would be greatly appreciated.
Steps To Reproduce
Simple steps to reproduce the issue:
1. Deploy a kind cluster on an AMD64 Linux machine.
2. Run KCP v0.11 on the AMD64 Linux machine.
3. Create a KCP workspace: kubectl workspace create $WORKSPACE_NAME --enter
4. Deploy the syncer in the kind cluster.
5. Label the syncer.
6. Create a KCP namespace.
7. Create a location with an instanceSelector pointing to the label of the syncer, as defined in the 6* script.
8. Create a placement policy to map the location to the KCP namespace. After step 8, you can see the namespace created in the pcluster.
9. Create an API binding in the current workspace, e.g. with the 7a-* script.
10. Check the namespace in the pcluster (on Linux AMD64 it terminates; on Darwin ARM64 there is no impact), as sketched below.
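To observe the divergence directly, here is a minimal sketch assuming the kind context name kind-edge-1 and the scripts listed below (the downstream namespace created by the syncer has a generated name, so it is safest to watch all namespaces):

# Terminal 1: watch namespaces in the pcluster while the binding is created
KUBECONFIG=~/.kube/config kubectl --context kind-edge-1 get ns -w

# Terminal 2: create the API binding against KCP
KUBECONFIG=.kcp/admin.kubeconfig ./7a-APIBINDING.sh

On the Linux AMD64 VM the watch should show the syncer-managed namespace go to Terminating; on Darwin ARM64 it stays Active.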
Expected Behaviour
With KCP v0.11.0 for AMD64: the namespace (downstream object) in the pcluster that corresponds to the KCP namespace (upstream object) is terminated.
With KCP v0.11.0 for Darwin ARM64: the namespace (downstream object) in the pcluster is not terminated.
Additional Context
The namespace state after creating the API binding
irl@hub:~/pankaj/octopus$ k get ns edge-1 -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kcp.io/cluster: 2rdefjxpbx1yb9rx
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"labels":{"name":"edge-1"},"name":"edge-1"}}
    scheduling.kcp.io/placement: ""
  creationTimestamp: "2023-06-08T15:10:58Z"
  labels:
    kubernetes.io/metadata.name: edge-1
    name: edge-1
  name: edge-1
  resourceVersion: "1972"
  uid: 92f0dd0b-b1fa-45ef-8d20-5bafd1d11e28
spec:
  finalizers:
  - kubernetes
status:
  conditions:
  - lastTransitionTime: "2023-06-08T15:38:03Z"
    message: No available sync targets
    reason: Unschedulable
    status: "False"
    type: NamespaceScheduled
  phase: Active
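The Unschedulable condition points at sync target availability, so a useful next check is whether KCP still considers the SyncTarget ready right after the binding; a minimal sketch from the KCP workspace (resource name taken from the scripts below):

KUBECONFIG=.kcp/admin.kubeconfig kubectl get synctargets
kubectl get synctarget edge-1 -o yaml   # inspect status.conditions for readiness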
The namespace state before creating the API binding
irl@hub:~/pankaj/octopus$ k get ns edge-1 -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kcp.io/cluster: 2rdefjxpbx1yb9rx
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"labels":{"name":"edge-1"},"name":"edge-1"}}
    scheduling.kcp.io/placement: ""
  creationTimestamp: "2023-06-08T15:10:58Z"
  labels:
    kubernetes.io/metadata.name: edge-1
    name: edge-1
    state.workload.kcp.io/6YKBsgxBSSW1XYFm3wkHf97QZEkLyTHV4Pqguq: Sync
  name: edge-1
  resourceVersion: "2033"
  uid: 92f0dd0b-b1fa-45ef-8d20-5bafd1d11e28
spec:
  finalizers:
  - kubernetes
status:
  conditions:
  - lastTransitionTime: "2023-06-08T15:40:02Z"
    status: "True"
    type: NamespaceScheduled
  phase: Active
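The visible delta between the two dumps is the state.workload.kcp.io/<syncTargetKey>: Sync label and the NamespaceScheduled condition flipping from True to Unschedulable. A quick way to capture that transition on either machine (minimal sketch, file names arbitrary):

k get ns edge-1 -o yaml > ns-before.yaml
./7a-APIBINDING.sh
k get ns edge-1 -o yaml > ns-after.yaml
diff ns-before.yaml ns-after.yaml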
pankajthorat@Pankajs-MacBook-Pro octopus % cat env_variables.sh
#!/bin/bash
export HUB_CLUSTER_NAME="hub-operator-system"
export CORE_CLUSTER_NAME="core-1"
export EDGE1_CLUSTER_NAME="edge-1"
export EDGE2_CLUSTER_NAME="edge-2"
#export CLUSTER_NAMES=("$HUB_CLUSTER_NAME" "$CORE_CLUSTER_NAME")
export CLUSTER_NAMES=("$HUB_CLUSTER_NAME" "$CORE_CLUSTER_NAME" "$EDGE1_CLUSTER_NAME" "$EDGE2_CLUSTER_NAME")
export WORKSPACE_NAME="octopus"
pankajthorat@Pankajs-MacBook-Pro octopus % cat 4-ws-sync.sh
#!/bin/bash
source env_variables.sh
export KUBECONFIG=.kcp/admin.kubeconfig
kubectl workspace create $WORKSPACE_NAME --enter
#kubectl workspace create-context
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  if [[ $cluster_name =~ "hub" ]]; then
    kubectl kcp workload sync "$cluster_name" --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.11.0 --resources=atomgraphlets.edge.operator.com -o "$cluster_name".yaml
  fi
  if [[ ! $cluster_name =~ "hub" ]]; then
    kubectl kcp workload sync "$cluster_name" --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.11.0 --resources=atomgraphlets.edge.operator.com -o "$cluster_name".yaml
    #kubectl kcp workload sync "$cluster_name" --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.10.0 --resources=serviceaccounts --resources=rolebindings.rbac.authorization.k8s.io --resources=roles.rbac.authorization.k8s.io --resources=clusterrolebindings.rbac.authorization.k8s.io --resources=clusterroles.rbac.authorization.k8s.io -o "$cluster_name".yaml
  fi
done
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  # apply the generated syncer manifest against the kind cluster, not the KCP server
  KUBECONFIG=~/.kube/config kubectl config use-context kind-"$cluster_name"
  KUBECONFIG=~/.kube/config kubectl apply -f "$cluster_name".yaml
done
echo "Sleeping for 30 seconds..."
sleep 30
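After this script it may help to confirm the syncer deployments actually came up in each kind cluster before moving on; a minimal check, assuming the kind-<cluster> context naming used above (the syncer namespace name is generated by the plugin, hence the grep):

source env_variables.sh
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  KUBECONFIG=~/.kube/config kubectl --context kind-"$cluster_name" get deployments -A | grep -i syncer
done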
pankajthorat@Pankajs-MacBook-Pro octopus % cat 5-labelsyncer.sh
#!/bin/bash
source env_variables.sh
export KUBECONFIG=.kcp/admin.kubeconfig
for cluster_name in "${CLUSTER_NAMES[@]}"; do
kubectl label synctarget/"$cluster_name" name=st-"$cluster_name" --overwrite
done
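A quick sanity check that the labels landed, since the Locations created in the next script select on them:

KUBECONFIG=.kcp/admin.kubeconfig kubectl get synctargets --show-labels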
pankajthorat@Pankajs-MacBook-Pro octopus % cat 6-ns-loc-pp.sh
#!/bin/bash
source env_variables.sh
export KUBECONFIG=.kcp/admin.kubeconfig
# create namespaces
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${cluster_name}
  labels:
    name: ${cluster_name}
EOF
done
# create new locations
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  kubectl apply -f - <<EOF
apiVersion: scheduling.kcp.io/v1alpha1
kind: Location
metadata:
  name: location-$cluster_name
  labels:
    name: location-$cluster_name
spec:
  instanceSelector:
    matchLabels:
      name: st-$cluster_name
  resource:
    group: workload.kcp.io
    resource: synctargets
    version: v1alpha1
EOF
done
# delete the default location
kubectl delete location default
# create placement policies
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  kubectl apply -f - <<EOF
apiVersion: scheduling.kcp.io/v1alpha1
kind: Placement
metadata:
  name: pp-$cluster_name
spec:
  locationResource:
    group: workload.kcp.io
    resource: synctargets
    version: v1alpha1
  locationSelectors:
  - matchLabels:
      name: location-$cluster_name
  namespaceSelector:
    matchLabels:
      name: $cluster_name
  locationWorkspace: root:$WORKSPACE_NAME
  #locationWorkspace: root
EOF
done
#kubectl kcp bind compute root
#kubectl kcp bind compute root:$WORKSPACE_NAME --apiexports=root:$WORKSPACE_NAME:kubernetes
#kubectl delete placements placement-1cgav5jo
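Before creating the binding it may also be worth confirming the scheduling objects exist and the placements actually bound; a minimal sketch (pp-edge-1 is one of the names generated by the script above):

KUBECONFIG=.kcp/admin.kubeconfig kubectl get locations,placements
kubectl get placement pp-edge-1 -o yaml   # inspect status to confirm it bound to the namespace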
pankajthorat@Pankajs-MacBook-Pro octopus % cat 7a-APIBINDING.sh
#!/bin/bash
#kubectl ws .
#kubectl kcp bind compute root:octopus --apiexports=root:octopus:kubernetes
export KUBECONFIG=.kcp/admin.kubeconfig
kubectl ws .
kubectl apply -f - <<EOF
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: bind-kube
spec:
  reference:
    export:
      path: "root:compute"
      name: kubernetes
EOF
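Finally, checking the binding itself may help narrow down whether the AMD64/ARM64 difference shows up in the binding or only in scheduling; a minimal sketch:

kubectl get apibindings
kubectl get apibinding bind-kube -o yaml   # check status.conditions (e.g. Ready)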