Merge pull request #750 from red-hat-storage/sync_ds--master
Syncing latest changes from master for rook
subhamkrai authored Oct 15, 2024
2 parents 39e05d2 + f9ab905 commit 0a0f59a
Showing 33 changed files with 3,809 additions and 2,912 deletions.
7 changes: 0 additions & 7 deletions .github/workflows/canary-test-config/action.yaml
@@ -4,13 +4,6 @@ description: Cluster setup for canary test
runs:
using: "composite"
steps:
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: true

- name: setup golang
uses: actions/setup-go@v5
with:
4 changes: 2 additions & 2 deletions .github/workflows/golangci-lint.yaml
@@ -31,7 +31,7 @@ jobs:
with:
go-version: "1.22"
- name: golangci-lint
uses: golangci/golangci-lint-action@aaa42aa0628b4ae2578232a66b541047968fac86 # v6.1.0
uses: golangci/golangci-lint-action@971e284b6050e8a5849b72094c50ab08da042db8 # v6.1.1
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.55
@@ -57,4 +57,4 @@ jobs:
go-version: "1.22.5"
check-latest: true
- name: govulncheck
uses: golang/govulncheck-action@dd0578b371c987f96d1185abb54344b44352bd58 # v1.0.3
uses: golang/govulncheck-action@b625fbe08f3bccbe446d94fbf87fcc875a4f50ee # v1.0.4
@@ -8,13 +8,6 @@ inputs:
runs:
using: "composite"
steps:
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@main
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tool-cache: true

- name: setup golang
uses: actions/setup-go@v5
with:
2 changes: 1 addition & 1 deletion .github/workflows/scorecards.yml
@@ -64,6 +64,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@461ef6c76dfe95d5c364de2f431ddbd31a417628 # v3.26.9
uses: github/codeql-action/upload-sarif@6db8d6351fd0be61f9ed8ebd12ccd35dcec51fea # v3.26.11
with:
sarif_file: results.sarif
4 changes: 4 additions & 0 deletions Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md
@@ -135,6 +135,10 @@ external-cluster-console # rbd mirror pool peer bootstrap import <token file path>

See the official rbd mirror documentation on [how to add a bootstrap peer](https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#bootstrap-peers).

!!! note
Disabling mirroring for the CephBlockPool requires disabling mirroring on all the
CephBlockPoolRadosNamespaces present underneath.

### Data spread across subdomains

Imagine the following topology with datacenters containing racks and then hosts:
@@ -50,12 +50,19 @@ If any setting is unspecified, a suitable default will be used automatically.

- `blockPoolName`: The metadata name of the CephBlockPool CR where the rados namespace will be created.

- `mirroring`: Sets up mirroring of the rados namespace (requires Ceph v20 or newer)
- `mode`: mirroring mode to run, possible values are "pool" or "image" (required). Refer to the [mirroring modes Ceph documentation](https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-mirroring) for more details
- `remoteNamespace`: Name of the rados namespace on the peer cluster where the namespace should be mirrored. The default is the same rados namespace.
- `snapshotSchedules`: snapshot schedule(s) at the **rados namespace** level. It is an array, and one or more schedules are supported.
- `interval`: frequency of the snapshots. The interval can be specified in days, hours, or minutes using d, h, m suffix respectively.
- `startTime`: optional, determines at what time the snapshot process starts, specified using the ISO 8601 time format.

## Creating a Storage Class

Once the RADOS namespace is created, an RBD-based StorageClass can be created to
provision PVs in this RADOS namespace. For this purpose, the `clusterID` value from the
CephBlockPoolRadosNamespace status needs to be put into the `clusterID` field of the
StorageClass spec.

Extract the clusterID from the CephBlockPoolRadosNamespace CR:

@@ -81,3 +88,45 @@ parameters:
pool: replicapool
...
```
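As a sketch of the extraction step (assuming the CR is named `namespace-a` in the `rook-ceph` namespace, and that Rook publishes the ID under `status.info.clusterID`, as in current releases):

```shell
# Read the clusterID that Rook publishes in the CR status;
# this value goes into the StorageClass "clusterID" parameter.
kubectl -n rook-ceph get cephblockpoolradosnamespace namespace-a \
  -o jsonpath='{.status.info.clusterID}'
```

The returned hash, not the literal rados namespace name, is what the CSI driver uses to resolve the pool and namespace.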

### Mirroring

First, enable mirroring for the parent CephBlockPool.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
name: replicapool
namespace: rook-ceph
spec:
replicated:
size: 3
mirroring:
enabled: true
mode: image
# schedule(s) of snapshot
snapshotSchedules:
- interval: 24h # daily snapshots
startTime: 14:00:00-05:00
```

Second, configure mirroring in the rados namespace CRD:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPoolRadosNamespace
metadata:
name: namespace-a
namespace: rook-ceph # namespace:cluster
spec:
# The name of the CephBlockPool CR where the namespace is created.
blockPoolName: replicapool
mirroring:
mode: image
remoteNamespace: namespace-a # default is the same as the local rados namespace
# schedule(s) of snapshot
snapshotSchedules:
- interval: 24h # daily snapshots
startTime: 14:00:00-05:00
```
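Once both CRs are applied, mirroring health can be checked from the toolbox pod; a sketch, assuming the `rbd` CLI's `pool/namespace` spec syntax and the names from the examples above:

```shell
# Show mirroring status for the rados namespace inside the pool
rbd mirror pool status replicapool/namespace-a
```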
5 changes: 4 additions & 1 deletion Documentation/CRDs/Block-Storage/ceph-rbd-mirror-crd.md
@@ -44,5 +44,8 @@ If any setting is unspecified, a suitable default will be used automatically.

### Configuring mirroring peers

Configure mirroring peers individually for each CephBlockPool. Refer to the
* Configure mirroring peers individually for each CephBlockPool. Refer to the
[CephBlockPool documentation](ceph-block-pool-crd.md#mirroring) for more detail.

* Configure mirroring peers individually for each CephBlockPoolRadosNamespace. Refer to the
[CephBlockPoolRadosNamespace documentation](ceph-block-pool-rados-namespace-crd.md#mirroring) for more detail.
110 changes: 109 additions & 1 deletion Documentation/CRDs/specification.md
@@ -3167,6 +3167,20 @@ string
the CephBlockPool CR.</p>
</td>
</tr>
<tr>
<td>
<code>mirroring</code><br/>
<em>
<a href="#ceph.rook.io/v1.RadosNamespaceMirroring">
RadosNamespaceMirroring
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Mirroring configuration of CephBlockPoolRadosNamespace</p>
</td>
</tr>
</table>
</td>
</tr>
@@ -3226,6 +3240,20 @@ string
the CephBlockPool CR.</p>
</td>
</tr>
<tr>
<td>
<code>mirroring</code><br/>
<em>
<a href="#ceph.rook.io/v1.RadosNamespaceMirroring">
RadosNamespaceMirroring
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Mirroring configuration of CephBlockPoolRadosNamespace</p>
</td>
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.CephBlockPoolRadosNamespaceStatus">CephBlockPoolRadosNamespaceStatus
@@ -11491,6 +11519,86 @@ optional</p>
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.RadosNamespaceMirroring">RadosNamespaceMirroring
</h3>
<p>
(<em>Appears on:</em><a href="#ceph.rook.io/v1.CephBlockPoolRadosNamespaceSpec">CephBlockPoolRadosNamespaceSpec</a>)
</p>
<div>
<p>RadosNamespaceMirroring represents the mirroring configuration of CephBlockPoolRadosNamespace</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>remoteNamespace</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>RemoteNamespace is the name of the CephBlockPoolRadosNamespace on the secondary cluster CephBlockPool</p>
</td>
</tr>
<tr>
<td>
<code>mode</code><br/>
<em>
<a href="#ceph.rook.io/v1.RadosNamespaceMirroringMode">
RadosNamespaceMirroringMode
</a>
</em>
</td>
<td>
<p>Mode is the mirroring mode; either pool or image</p>
</td>
</tr>
<tr>
<td>
<code>snapshotSchedules</code><br/>
<em>
<a href="#ceph.rook.io/v1.SnapshotScheduleSpec">
[]SnapshotScheduleSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SnapshotSchedules is the scheduling of snapshot for mirrored images</p>
</td>
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.RadosNamespaceMirroringMode">RadosNamespaceMirroringMode
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#ceph.rook.io/v1.RadosNamespaceMirroring">RadosNamespaceMirroring</a>)
</p>
<div>
<p>RadosNamespaceMirroringMode represents the mode of the RadosNamespace</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>&#34;image&#34;</p></td>
<td><p>RadosNamespaceMirroringModeImage represents the image mode</p>
</td>
</tr><tr><td><p>&#34;pool&#34;</p></td>
<td><p>RadosNamespaceMirroringModePool represents the pool mode</p>
</td>
</tr></tbody>
</table>
<h3 id="ceph.rook.io/v1.ReadAffinitySpec">ReadAffinitySpec
</h3>
<p>
@@ -12155,7 +12263,7 @@ string
<h3 id="ceph.rook.io/v1.SnapshotScheduleSpec">SnapshotScheduleSpec
</h3>
<p>
(<em>Appears on:</em><a href="#ceph.rook.io/v1.FSMirroringSpec">FSMirroringSpec</a>, <a href="#ceph.rook.io/v1.MirroringSpec">MirroringSpec</a>)
(<em>Appears on:</em><a href="#ceph.rook.io/v1.FSMirroringSpec">FSMirroringSpec</a>, <a href="#ceph.rook.io/v1.MirroringSpec">MirroringSpec</a>, <a href="#ceph.rook.io/v1.RadosNamespaceMirroring">RadosNamespaceMirroring</a>)
</p>
<div>
<p>SnapshotScheduleSpec represents the snapshot scheduling settings of a mirrored pool</p>
@@ -196,8 +196,6 @@ The erasure coded pool must be set as the `dataPool` parameter in

If a node goes down while a pod with a mounted RBD RWO volume is running on it, the volume cannot automatically be mounted on another node. The node must be guaranteed to be offline before the volume can be mounted on another node.

!!! Note
These instructions are for clusters with Kubernetes version 1.26 or greater. For K8s 1.25 or older, see the [manual steps in the CSI troubleshooting guide](../../Troubleshooting/ceph-csi-common-issues.md#node-loss) to recover from the node loss.

### Configure CSI-Addons

@@ -206,6 +204,11 @@ Deploy csi-addons controller and enable `csi-addons` sidecar as mentioned in the

### Handling Node Loss

!!! warning
Automated node loss handling is currently disabled. Please refer to the [manual steps](../../Troubleshooting/ceph-csi-common-issues.md#node-loss) to recover from node loss.
We are actively working on a new design for this feature.
For more details see the [tracking issue](https://github.com/rook/rook/issues/14832).

When a node is confirmed to be down, add the following taints to the node:

```console
3 changes: 0 additions & 3 deletions Documentation/Troubleshooting/ceph-csi-common-issues.md
@@ -413,9 +413,6 @@ Where `-m` is one of the mon endpoints and the `--key` is the key used by the CSI

When a node is lost, you will see application pods on the node stuck in the `Terminating` state while another pod is rescheduled and is in the `ContainerCreating` state.

!!! important
For clusters with Kubernetes version 1.26 or greater, see the [improved automation](../Storage-Configuration/Block-Storage-RBD/block-storage.md#recover-rbd-rwo-volume-in-case-of-node-loss) to recover from the node loss. If using K8s 1.25 or older, continue with these instructions.

### Force deleting the pod

To force delete the pod stuck in the `Terminating` state:
2 changes: 2 additions & 0 deletions PendingReleaseNotes.md
@@ -5,3 +5,5 @@
- Removed support for Ceph Quincy (v17) since it has reached end of life

## Features

- Enable mirroring for CephBlockPoolRadosNamespaces (see [#14701](https://github.com/rook/rook/pull/14701)).
24 changes: 24 additions & 0 deletions build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml
@@ -42,6 +42,30 @@ spec:
x-kubernetes-validations:
- message: blockPoolName is immutable
rule: self == oldSelf
mirroring:
properties:
mode:
enum:
- ""
- pool
- image
type: string
remoteNamespace:
type: string
snapshotSchedules:
items:
properties:
interval:
type: string
path:
type: string
startTime:
type: string
type: object
type: array
required:
- mode
type: object
name:
type: string
x-kubernetes-validations: