Merge pull request #686 from red-hat-storage/sync_us--master
Syncing latest changes from upstream master for rook
travisn authored Jul 26, 2024
2 parents c4f7e0c + 39cc31d commit 1b76ad1
Showing 2 changed files with 68 additions and 29 deletions.
73 changes: 54 additions & 19 deletions Documentation/CRDs/Cluster/external-cluster/external-cluster.md
@@ -121,25 +121,6 @@ The storageclass is used to create a volume in the pool matching the topology wh

For more details, see the [Topology-Based Provisioning](topology-for-external-mode.md)

### Upgrade Example

1. If the consumer cluster doesn't have restricted caps, this will upgrade all the default csi-users (non-restricted):

```console
python3 create-external-cluster-resources.py --upgrade
```

2. If the consumer cluster has restricted caps:
Restricted users created with the `--restricted-auth-permission` flag must pass the mandatory flags `--rbd-data-pool-name` (if it is an RBD user), `--k8s-cluster-name`, and `--run-as-user` while upgrading. For CephFS users, if the `--cephfs-filesystem-name` flag was passed when the csi-users were created, it is mandatory while upgrading as well. In this example the user would be `client.csi-rbd-node-rookstorage-replicapool` (following the pattern `csi-user-clusterName-poolName`)

```console
python3 create-external-cluster-resources.py --upgrade --rbd-data-pool-name replicapool --k8s-cluster-name rookstorage --run-as-user client.csi-rbd-node-rookstorage-replicapool
```

!!! note
An existing non-restricted user cannot be converted to a restricted user by upgrading.
The upgrade flag should only be used to append new permissions to users. It shouldn't be used to change a csi user's already-applied permissions. For example, you shouldn't change the pool(s) a user has access to.

### Admin privileges

If the cluster requires the admin keyring for configuration, update the `rook-ceph-mon` secret with the `client.admin` keyring
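
As a hedged illustration (not part of this commit): in external mode the operator reads its Ceph credentials from the `rook-ceph-mon` secret. The data keys assumed below (`ceph-username`, `ceph-secret`) can vary across Rook versions, so inspect the existing secret before patching.

```console
# Confirm which data keys hold the Ceph credentials (assumption: ceph-username/ceph-secret)
kubectl --namespace rook-ceph get secret rook-ceph-mon --output jsonpath='{.data}'

# Point the secret at the admin keyring; replace <client.admin key> with the real key
kubectl --namespace rook-ceph patch secret rook-ceph-mon --type merge \
  --patch '{"stringData":{"ceph-username":"client.admin","ceph-secret":"<client.admin key>"}}'
```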
@@ -305,3 +286,57 @@ you can export the settings from this cluster with the following steps.

!!! important
For other clusters to connect to storage in this cluster, Rook must be configured with a networking configuration that is accessible from other clusters. Most commonly this is done by enabling host networking in the CephCluster CR so the Ceph daemons will be addressable by their host IPs.
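
For illustration only (not from this commit): host networking is controlled by `network.provider` in the CephCluster CR. A minimal sketch, assuming the default `rook-ceph` cluster name and namespace; in practice this is usually set when the cluster is first created rather than patched afterwards.

```console
kubectl --namespace rook-ceph patch cephcluster rook-ceph --type merge \
  --patch '{"spec":{"network":{"provider":"host"}}}'
```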

## Upgrades

Upgrading the cluster differs depending on whether the caps are restricted or non-restricted:

1. If the consumer cluster doesn't have restricted caps, this will upgrade all the default CSI users (non-restricted):

```console
python3 create-external-cluster-resources.py --upgrade
```

2. If the consumer cluster has restricted caps:

Restricted users created with the `--restricted-auth-permission` flag must pass the mandatory flags `--rbd-data-pool-name` (if it is an RBD user), `--k8s-cluster-name`, and `--run-as-user` while upgrading. For CephFS users, if the `--cephfs-filesystem-name` flag was passed when the CSI users were created, it is mandatory while upgrading as well. In this example the user would be `client.csi-rbd-node-rookstorage-replicapool` (following the pattern `csi-user-clusterName-poolName`). A verification sketch follows the note below.

```console
python3 create-external-cluster-resources.py --upgrade --rbd-data-pool-name replicapool --k8s-cluster-name rookstorage --run-as-user client.csi-rbd-node-rookstorage-replicapool
```

!!! note
1) An existing non-restricted user cannot be converted to a restricted user by upgrading.
2) The upgrade flag should only be used to append new permissions to users. It shouldn't be used to change a CSI user's already-applied permissions. For example, be careful not to change the pool(s) that a user has access to.
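
After either upgrade path, the resulting caps can be checked from the Ceph side. A hedged verification sketch; the user names below follow the examples above and will differ in other deployments.

```console
# Default (non-restricted) CSI users
ceph auth get client.csi-rbd-node
ceph auth get client.csi-rbd-provisioner

# Restricted user from the example above
ceph auth get client.csi-rbd-node-rookstorage-replicapool
```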

### Upgrade cluster to utilize a new feature

Some Rook upgrades may require re-running the import steps, or may introduce new external cluster features that can be most easily enabled by re-running the import steps.

To re-run the import steps with new options, re-run the python script with the same configuration options that were used in past invocations, plus any options that are being added or modified.

Starting with Rook v1.15, the script stores the configuration in the `external-cluster-user-command` ConfigMap for easy future reference.

* `args`: The exact arguments that were used when running the script. Arguments are resolved using the priority: command-line args > `config.ini` file values > default values.

#### Example `external-cluster-user-command` ConfigMap:

1. Get the last-applied config, if it is available:

```console
$ kubectl get configmap --namespace rook-ceph external-cluster-user-command --output jsonpath='{.data.args}'
```

2. Copy the output to `config.ini`

3. Make any desired modifications and additions to `config.ini`

4. Run the python script again using the [config file](#config-file) (see the sketch after this list)

5. [Copy the bash output](#2-copy-the-bash-output)

6. Run the steps under [import-the-source-data](#import-the-source-data)
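
A hedged sketch of steps 1–4 above, assuming the script's `--config-file` option is available in your version and that the stored args are valid `config.ini` content; the file path is illustrative.

```console
# Steps 1-2: save the last-applied arguments into config.ini
kubectl --namespace rook-ceph get configmap external-cluster-user-command \
  --output jsonpath='{.data.args}' > config.ini

# Steps 3-4: edit config.ini as needed, then re-run the script against it
python3 create-external-cluster-resources.py --config-file config.ini
```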

!!! warning
If the last-applied config is unavailable, run the current version of the script again using the previously-applied config and CLI flags.
Failing to reuse the same configuration options when re-invoking the python script can result in unexpected changes when the import steps are re-run.
24 changes: 14 additions & 10 deletions tests/integration/ceph_upgrade_test.go
@@ -105,11 +105,12 @@ func (s *UpgradeSuite) TestUpgradeHelm() {
}

func (s *UpgradeSuite) testUpgrade(useHelm bool, initialCephVersion v1.CephVersionSpec) {
s.baseSetup(useHelm, installer.Version1_14, initialCephVersion)
baseRookImage := installer.Version1_14
s.baseSetup(useHelm, baseRookImage, initialCephVersion)

objectUserID := "upgraded-user"
preFilename := "pre-upgrade-file"
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(objectUserID, preFilename)
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(baseRookImage, objectUserID, preFilename)

clusterInfo := client.AdminTestClusterInfo(s.namespace)
requireBlockImagesRemoved := false
@@ -183,12 +184,13 @@ }
}

func (s *UpgradeSuite) TestUpgradeCephToQuincyDevel() {
s.baseSetup(false, installer.LocalBuildTag, installer.QuincyVersion)
baseRookImage := installer.LocalBuildTag
s.baseSetup(false, baseRookImage, installer.QuincyVersion)

objectUserID := "upgraded-user"
preFilename := "pre-upgrade-file"
s.settings.CephVersion = installer.QuincyVersion
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(objectUserID, preFilename)
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(baseRookImage, objectUserID, preFilename)
clusterInfo := client.AdminTestClusterInfo(s.namespace)
requireBlockImagesRemoved := false
defer func() {
@@ -216,12 +218,13 @@ }
}

func (s *UpgradeSuite) TestUpgradeCephToReefDevel() {
s.baseSetup(false, installer.LocalBuildTag, installer.ReefVersion)
baseRookImage := installer.LocalBuildTag
s.baseSetup(false, baseRookImage, installer.ReefVersion)

objectUserID := "upgraded-user"
preFilename := "pre-upgrade-file"
s.settings.CephVersion = installer.ReefVersion
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(objectUserID, preFilename)
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(baseRookImage, objectUserID, preFilename)
clusterInfo := client.AdminTestClusterInfo(s.namespace)
requireBlockImagesRemoved := false
defer func() {
@@ -249,12 +252,13 @@ }
}

func (s *UpgradeSuite) TestUpgradeCephToSquidDevel() {
s.baseSetup(false, installer.LocalBuildTag, installer.SquidVersion)
baseRookImage := installer.LocalBuildTag
s.baseSetup(false, baseRookImage, installer.SquidVersion)

objectUserID := "upgraded-user"
preFilename := "pre-upgrade-file"
s.settings.CephVersion = installer.SquidVersion
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(objectUserID, preFilename)
numOSDs, rbdFilesToRead, cephfsFilesToRead := s.deployClusterforUpgrade(baseRookImage, objectUserID, preFilename)
clusterInfo := client.AdminTestClusterInfo(s.namespace)
requireBlockImagesRemoved := false
defer func() {
@@ -281,7 +285,7 @@ checkCephObjectUser(&s.Suite, s.helper, s.k8sh, s.namespace, installer.ObjectStoreName, objectUserID, true, false)
checkCephObjectUser(&s.Suite, s.helper, s.k8sh, s.namespace, installer.ObjectStoreName, objectUserID, true, false)
}

func (s *UpgradeSuite) deployClusterforUpgrade(objectUserID, preFilename string) (int, []string, []string) {
func (s *UpgradeSuite) deployClusterforUpgrade(baseRookImage, objectUserID, preFilename string) (int, []string, []string) {
//
// Create block, object, and file storage before the upgrade
// The helm chart already created these though.
@@ -330,7 +334,7 @@ func (s *UpgradeSuite) deployClusterforUpgrade(objectUserID, preFilename string)
require.True(s.T(), created)

// verify that we're actually running the right pre-upgrade image
s.verifyOperatorImage(installer.Version1_14)
s.verifyOperatorImage(baseRookImage)

assert.NoError(s.T(), s.k8sh.WriteToPod("", rbdPodName, preFilename, simpleTestMessage))
assert.NoError(s.T(), s.k8sh.ReadFromPod("", rbdPodName, preFilename, simpleTestMessage))
