testbed: add new OSD in Ceph (#676)
Signed-off-by: Christian Berendt <[email protected]>
berendt authored Sep 11, 2024
1 parent 61117e7 commit d1b9302
Showing 1 changed file with 57 additions and 0 deletions.
docs/guides/other-guides/testbed.mdx: 57 additions, 0 deletions
@@ -595,6 +595,63 @@ of the other cloud is changed accordingly.
| `/opt/configuration/scripts/upgrade/320-openstack-services-baremetal.sh` | |
| `/opt/configuration/scripts/upgrade/330-openstack-services-additional.sh` | |

### Add a new OSD in Ceph

In the testbed, three volumes per node are provided for use by Ceph by default. Two of
these devices are used as OSDs during the initial deployment. The third device is intended
for testing the addition of a further OSD to the Ceph cluster.
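
Before changing anything, it can help to confirm that the spare device is actually present and unused on the node. A minimal check, assuming the devices show up as `sdb`, `sdc`, and `sdd` on `testbed-node-0`:

```
$ lsblk -d -o NAME,SIZE,TYPE /dev/sdb /dev/sdc /dev/sdd   # the three volumes provided for Ceph
$ lsblk /dev/sdd                                          # should not show any LVM volume yet
```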

1. Add `sdd` to `ceph_osd_devices` in `/opt/configuration/inventory/host_vars/testbed-node-0.testbed.osism.xyz/ceph-lvm-configuration.yml`.
The following content is an example; the IDs will look different on every deployment. Do not copy it 1:1,
only add `sdd` to the existing file.

```yaml
---
#
# This is Ceph LVM configuration for testbed-node-0.testbed.osism.xyz
# generated by ceph-configure-lvm-volumes playbook.
#
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 95a9a2e0-b23f-55b2-a04f-e02ddfc0e82a
  sdc:
    osd_lvm_uuid: 29899765-42bf-557b-ae9c-5c7c984b2243
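  # The sdd entry below is the newly added device. It intentionally has no
  # osd_lvm_uuid yet; the ceph-configure-lvm-volumes run in step 2 generates it.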
  sdd:
lvm_volumes:
  - data: osd-block-95a9a2e0-b23f-55b2-a04f-e02ddfc0e82a
    data_vg: ceph-95a9a2e0-b23f-55b2-a04f-e02ddfc0e82a
  - data: osd-block-29899765-42bf-557b-ae9c-5c7c984b2243
    data_vg: ceph-29899765-42bf-557b-ae9c-5c7c984b2243
```

2. Run `osism apply ceph-configure-lvm-volumes -l testbed-node-0.testbed.osism.xyz`.

3. Run `cp /tmp/testbed-node-0.testbed.osism.xyz-ceph-lvm-configuration.yml /opt/configuration/inventory/host_vars/testbed-node-0.testbed.osism.xyz/ceph-lvm-configuration.yml`. This takes over the regenerated configuration (see the sketch after this list).

4. Run `osism reconciler sync`.

5. Run `osism apply ceph-create-lvm-devices -l testbed-node-0.testbed.osism.xyz`.

6. Run `osism apply ceph-osds -l testbed-node-0.testbed.osism.xyz -e ceph_handler_osds_restart=false`.

7. Check the OSD tree:

```
$ ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
-1         0.13640  root default
-3         0.05846      host testbed-node-0
 2    hdd  0.01949          osd.2                 up   1.00000  1.00000
 4    hdd  0.01949          osd.4                 up   1.00000  1.00000
 6    hdd  0.01949          osd.6                 up   1.00000  1.00000
-5         0.03897      host testbed-node-1
 0    hdd  0.01949          osd.0                 up   1.00000  1.00000
 5    hdd  0.01949          osd.5                 up   1.00000  1.00000
-7         0.03897      host testbed-node-2
 1    hdd  0.01949          osd.1                 up   1.00000  1.00000
 3    hdd  0.01949          osd.3                 up   1.00000  1.00000
```
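
For orientation, the regenerated `ceph-lvm-configuration.yml` taken over in steps 2 and 3 should roughly follow the pattern below, mirroring the example from step 1. This is only a sketch: the UUID and volume names for `sdd` are generated by the playbook and will differ, so the `<generated-uuid>` placeholder stands in for the real value.

```yaml
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 95a9a2e0-b23f-55b2-a04f-e02ddfc0e82a
  sdc:
    osd_lvm_uuid: 29899765-42bf-557b-ae9c-5c7c984b2243
  sdd:
    osd_lvm_uuid: <generated-uuid>   # placeholder, filled in by ceph-configure-lvm-volumes
lvm_volumes:
  - data: osd-block-95a9a2e0-b23f-55b2-a04f-e02ddfc0e82a
    data_vg: ceph-95a9a2e0-b23f-55b2-a04f-e02ddfc0e82a
  - data: osd-block-29899765-42bf-557b-ae9c-5c7c984b2243
    data_vg: ceph-29899765-42bf-557b-ae9c-5c7c984b2243
  - data: osd-block-<generated-uuid>
    data_vg: ceph-<generated-uuid>
```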

### Ceph via Rook (technical preview)

Please have a look at [Deploy Guide - Services - Rook](../deploy-guide/services/rook.md) and [Configuration Guide - Rook](../configuration-guide/rook.md) for details on how to configure Rook.
