@@ -172,22 +172,43 @@ goes wrong.
### Workflow Description
- > Explain how the user will use the feature. Be detailed and explicit. Describe
- > all of the actors, their roles, and the APIs or interfaces involved. Define a
- > starting state and then list the steps that the user would need to go through to
- > trigger the feature described in the enhancement. Optionally add a
- > [mermaid](https://github.com/mermaid-js/mermaid#readme) sequence diagram.
- >
- > Use sub-sections to explain variations, such as for error handling,
- > failure recovery, or alternative outcomes.
-
- **cluster-admin** is a human user responsible for managing a cluster.
-
- 1. Start with a 4.18 cluster with conflicting CRDs.
- 2. Upgrade to 4.19.
- 3. Check clusteroperators, see a conflict.
- 4. Run some `oc` command.
- 5. Check the ingress clusteroperator again. Now everything should be dandy.
+ The workflow in this case is an upgrade process. From the _user_ perspective,
+ the CRDs are fully managed by the platform from here on out, so the user only
+ needs to interact with the upgrade workflow if their cluster previously had
+ Gateway API CRDs installed. The workflow consists of the pre-upgrade checks
+ and the post-upgrade checks.
+
+ #### Pre-upgrade
+
+ 1. A pre-upgrade check in the CIO verifies CRD presence (see the sketch below)
+    * IF the CRDs are present
+      * an admin gate is created requiring acknowledgement of CRD succession
+      * UNTIL the schema matches an exact version we set `Upgradeable=false`
+ 2. Once the CRDs are not present OR are an exact match, we set `Upgradeable=true`
+
+ > **Note**: A **cluster-admin** is required for these steps.
+
+ > **Note**: The logic for these checks lives in the previous release (4.18), but
+ > does not need to be carried forward to future releases, as other logic exists
+ > there to handle Gateway API CRD state (see below).
+
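+ A minimal sketch, assuming a controller-runtime client, of how the pre-upgrade
+ presence and schema check could be structured in Go. The names
+ `gatewayAPICRDNames`, `expectedCRDSpecs`, and `checkGatewayAPICRDs` are
+ illustrative, not actual CIO symbols, and the admin gate and `Upgradeable`
+ condition wiring is omitted.
+
+ ```go
+ // Sketch of the 4.18 pre-upgrade check. The CIO would call this and translate
+ // the result into the Upgradeable condition and, when acknowledgement is
+ // required, an admin gate.
+ package preupgrade
+
+ import (
+     "context"
+     "fmt"
+
+     apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+     "k8s.io/apimachinery/pkg/api/equality"
+     apierrors "k8s.io/apimachinery/pkg/api/errors"
+     "sigs.k8s.io/controller-runtime/pkg/client"
+ )
+
+ // gatewayAPICRDNames is an illustrative subset of the CRDs the check covers.
+ var gatewayAPICRDNames = []string{
+     "gatewayclasses.gateway.networking.k8s.io",
+     "gateways.gateway.networking.k8s.io",
+     "httproutes.gateway.networking.k8s.io",
+ }
+
+ // expectedCRDSpecs would hold the exact specs shipped with the new release.
+ var expectedCRDSpecs map[string]apiextensionsv1.CustomResourceDefinitionSpec
+
+ // checkGatewayAPICRDs reports upgradeable=true only when every Gateway API CRD
+ // is either absent or an exact match for the expected spec, and returns a
+ // human-readable reason when it reports false.
+ func checkGatewayAPICRDs(ctx context.Context, c client.Client) (bool, string, error) {
+     for _, name := range gatewayAPICRDNames {
+         crd := &apiextensionsv1.CustomResourceDefinition{}
+         if err := c.Get(ctx, client.ObjectKey{Name: name}, crd); err != nil {
+             if apierrors.IsNotFound(err) {
+                 continue // an absent CRD does not block the upgrade
+             }
+             return false, "", fmt.Errorf("failed to get CRD %s: %w", name, err)
+         }
+         // Present but not an exact match: keep Upgradeable=false and surface
+         // the admin gate asking the cluster-admin to acknowledge succession.
+         if !equality.Semantic.DeepEqual(crd.Spec, expectedCRDSpecs[name]) {
+             return false, fmt.Sprintf("CRD %s does not match the expected schema", name), nil
+         }
+     }
+     return true, "", nil
+ }
+ ```
+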
+ #### Post-upgrade
+
+ > **Note**: The logic for these checks lives in the new release (4.19) and onward.
+
+ 1. The CIO is hereafter deployed alongside its CRD protection VAP
+ 2. The CIO constantly checks for the presence of the CRDs (sketched below)
+    * IF the CRDs are present
+      * UNTIL the CRD schema matches what is expected, the CIO upgrades them
+        * IF the upgrade fails, `Degraded` status is set
+    * ELSE the CRDs are deployed by the CIO
+
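+ Continuing the illustrative package from the previous sketch (and reusing its
+ imports), a rough sketch of the steady-state check-and-upgrade loop from step 2.
+ `desiredCRDs` is an assumed helper standing in for the manifests shipped with
+ the release; turning a returned error into the `Degraded` condition is left to
+ the caller.
+
+ ```go
+ // desiredCRDs would return the Gateway API CRD manifests embedded in the
+ // release payload; where they come from is out of scope for this sketch.
+ func desiredCRDs() []*apiextensionsv1.CustomResourceDefinition { return nil }
+
+ // ensureGatewayAPICRDs creates the CRDs when absent and upgrades them when
+ // their schema drifts from what is expected. A returned error is what the
+ // caller would report as Degraded on the ingress clusteroperator.
+ func ensureGatewayAPICRDs(ctx context.Context, c client.Client) error {
+     for _, desired := range desiredCRDs() {
+         current := &apiextensionsv1.CustomResourceDefinition{}
+         err := c.Get(ctx, client.ObjectKey{Name: desired.Name}, current)
+         switch {
+         case apierrors.IsNotFound(err):
+             // The CRDs are not present, so the CIO deploys them.
+             if err := c.Create(ctx, desired); err != nil {
+                 return fmt.Errorf("creating CRD %s: %w", desired.Name, err)
+             }
+         case err != nil:
+             return fmt.Errorf("getting CRD %s: %w", desired.Name, err)
+         case !equality.Semantic.DeepEqual(current.Spec, desired.Spec):
+             // Present but stale: upgrade until the schema matches expectations.
+             current.Spec = desired.Spec
+             if err := c.Update(ctx, current); err != nil {
+                 return fmt.Errorf("updating CRD %s: %w", desired.Name, err)
+             }
+         }
+     }
+     return nil
+ }
+ ```
+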
+ > **Note**: If we reach `Degraded`, it is expected that some tampering has
+ > occurred (e.g. a cluster-admin has for some reason destroyed our VAP and
+ > manually changed the CRDs). For the initial release we will simply require
+ > manual intervention (support) to fix this, as we can't guess too well at the
+ > original intent behind the change. In future iterations we may consider more
+ > solutions if this becomes a common problem for some reason.
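+
+ This section does not spell out the CRD protection VAP itself. Purely as an
+ illustration of the idea, and as a separate file in the same illustrative
+ package, a ValidatingAdmissionPolicy along these lines could reject direct
+ modification of the Gateway API CRDs by anything other than the operator's
+ service account. The policy name, the service account, and the exempted
+ identities are assumptions, not the final design.
+
+ ```go
+ package preupgrade
+
+ import (
+     admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
+     metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ )
+
+ // gatewayAPICRDProtectionPolicy sketches a policy that only lets the assumed
+ // CIO service account update or delete the Gateway API CRDs.
+ func gatewayAPICRDProtectionPolicy() *admissionregistrationv1.ValidatingAdmissionPolicy {
+     fail := admissionregistrationv1.Fail
+     return &admissionregistrationv1.ValidatingAdmissionPolicy{
+         ObjectMeta: metav1.ObjectMeta{Name: "gateway-api-crd-protection"},
+         Spec: admissionregistrationv1.ValidatingAdmissionPolicySpec{
+             FailurePolicy: &fail,
+             MatchConstraints: &admissionregistrationv1.MatchResources{
+                 ResourceRules: []admissionregistrationv1.NamedRuleWithOperations{{
+                     // Only the named Gateway API CRDs are constrained.
+                     ResourceNames: []string{
+                         "gatewayclasses.gateway.networking.k8s.io",
+                         "gateways.gateway.networking.k8s.io",
+                         "httproutes.gateway.networking.k8s.io",
+                     },
+                     RuleWithOperations: admissionregistrationv1.RuleWithOperations{
+                         Operations: []admissionregistrationv1.OperationType{
+                             admissionregistrationv1.Update,
+                             admissionregistrationv1.Delete,
+                         },
+                         Rule: admissionregistrationv1.Rule{
+                             APIGroups:   []string{"apiextensions.k8s.io"},
+                             APIVersions: []string{"v1"},
+                             Resources:   []string{"customresourcedefinitions"},
+                         },
+                     },
+                 }},
+             },
+             Validations: []admissionregistrationv1.Validation{{
+                 // Assumed operator identity; the real check may differ.
+                 Expression: `request.userInfo.username == "system:serviceaccount:openshift-ingress-operator:ingress-operator"`,
+                 Message:    "Gateway API CRDs are managed by the cluster-ingress-operator and may not be modified directly",
+             }},
+         },
+     }
+ }
+ ```
+
+ A matching ValidatingAdmissionPolicyBinding (not shown) would be deployed
+ alongside the policy to put it into effect.
+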
### API Extensions